| code | docstring | _id |
|---|---|---|
def __call__(self, results): <NEW_LINE> <INDENT> if 'flip' not in results: <NEW_LINE> <INDENT> flip = True if np.random.rand() < self.flip_ratio else False <NEW_LINE> results['flip'] = flip <NEW_LINE> <DEDENT> if 'flip_direction' not in results: <NEW_LINE> <INDENT> results['flip_direction'] = self.direction <NEW_LINE> ... | Call function to flip bounding boxes, masks, semantic segmentation
maps.
Args:
results (dict): Result dict from loading pipeline.
Returns:
dict: Flipped results, 'flip', 'flip_direction' keys are added into result dict. | 625941b230bbd722463cbb54 |
def get_cation_cn(self, radius=2.6, min_weight=10e-5, anions=None): <NEW_LINE> <INDENT> if anions is None: <NEW_LINE> <INDENT> anions = ['O2-', 'O', 'F-', 'F', 'Cl-', 'Cl', 'I-', 'I', 'Br-', 'Br', 'S2-', 'S'] <NEW_LINE> <DEDENT> cation_sites = [] <NEW_LINE> for site in self._structure.sites: <NEW_LINE> <INDENT> if site... | Get all cation-centered polyhedra for a structure
:param radius: (float) distance in Angstroms for bond cutoff
:param anions: (List of Strings) list of species which we consider anions in the structure
:return: (dict) A dictionary with keys corresponding to different cations and the values to the cation's
ECoN coo... | 625941b2d268445f265b4bff |
def __contains__(self, key): <NEW_LINE> <INDENT> return hasattr(self, str(self.HASH_PREFIX + key)) | Return True if our object contains the given key (JSON name). | 625941b2462c4b4f79d1d45a |
def test_x_sw(self): <NEW_LINE> <INDENT> bk = Backend(product='qulacs', device='cpu_simulator') <NEW_LINE> qc = QCirc().x(0).sw(0,1) <NEW_LINE> res = bk.run(qcirc=qc) <NEW_LINE> actual = res.info['quantumstate'].get_vector() <NEW_LINE> expect = reverse_bit_order(np.array([0j, (1+0j), 0j, 0j])) <NEW_LINE> ans = equal_ve... | test 'sw' gate (following 'x' gate, not 'h' gates)
| 625941b2d8ef3951e32432c8 |
def __init__(self, WL, T): <NEW_LINE> <INDENT> self.WL = WL <NEW_LINE> self.T = T <NEW_LINE> self.h = [None] * T <NEW_LINE> self.w = np.zeros(T) <NEW_LINE> self.D = None | Parameters
----------
WL : the class of the base weak learner
T : the number of base learners to learn | 625941b23346ee7daa2b2af2 |
def test_size_width_too_big(self): <NEW_LINE> <INDENT> request_path = '/%s/full/3601,/0/default.jpg' % (self.test_jpeg_id,) <NEW_LINE> resp = self.client.get(request_path) <NEW_LINE> self.assertEqual(resp.status_code, 404) | Explicit width in size parameter is larger than image size. | 625941b22ae34c7f2600cec4 |
def predict(self, src_seq): <NEW_LINE> <INDENT> src_id_seq = Variable(torch.LongTensor([self.src_vocab.stoi[tok] for tok in src_seq]), volatile=True).view(1, -1) <NEW_LINE> if torch.cuda.is_available(): <NEW_LINE> <INDENT> src_id_seq = src_id_seq.cuda() <NEW_LINE> <DEDENT> decoder_kick = Variable(torch.LongTensor([self... | Make prediction given `src_seq` as input.
Args:
src_seq (list): list of tokens in source language
Returns:
tgt_seq (list): list of tokens in target language as predicted
by the pre-trained model | 625941b267a9b606de4a7c4e |
def fit_base_models(self, data, labels, oos_data, oos_labels=None): <NEW_LINE> <INDENT> dataset_blend_train = [] <NEW_LINE> dataset_blend_oos = [] <NEW_LINE> for i, model in enumerate(self.base_models): <NEW_LINE> <INDENT> train_predictions, oos_predictions = self.cv_fit_model(model, data, labels, oos_data, oos_labels)... | Helper method called by fit_predict method that fits the training data to the base models and makes
predictions on train and oos datasets. The method then blends/combines the predictions and
returns the newly generated train and oos datasets.
Parameters
----------
data: numpy array, shape: (number_of_samples, number... | 625941b2925a0f43d2549bfd |
def induce_program(self, output, timestep): <NEW_LINE> <INDENT> label = torch.max(output, 1)[1].data.cpu().numpy() <NEW_LINE> for index in range(self.batch_size): <NEW_LINE> <INDENT> exp = self.unique_draws[label[index]] <NEW_LINE> self.expressions[index] += exp <NEW_LINE> program = self.tokenizer(self.expressions[inde... | Induces the program by taking current output from the network,
returns the simulated stack. Also takes care of the validity of
the programs. Currently, as soon as the program is recognized to be
wrong, it just drops it.
For invalid programs we produce empty canvas.
For programs that stop and are valid, the state of ... | 625941b256b00c62f0f143e6 |
def levelOrder(self, root): <NEW_LINE> <INDENT> if not root: return [] <NEW_LINE> ans = [] <NEW_LINE> q = deque([root]) <NEW_LINE> while q: <NEW_LINE> <INDENT> current_level, size = [], len(q) <NEW_LINE> for _ in range(size): <NEW_LINE> <INDENT> node = q.popleft() <NEW_LINE> current_level.append(node.val) <NEW_LINE> if... | :type root: TreeNode
:rtype: List[List[int]] | 625941b250812a4eaa59c0b2 |
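De-tokenized, the `levelOrder` row above is a queue-based breadth-first traversal; a runnable sketch (the minimal `TreeNode` class is an assumption, standing in for the one the original problem provides):

```python
from collections import deque

class TreeNode:
    """Minimal binary-tree node, assumed for this sketch."""
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    """Return node values grouped by depth using a queue-based BFS."""
    if not root:
        return []
    ans, q = [], deque([root])
    while q:
        current_level = []
        for _ in range(len(q)):  # drain exactly one level per pass
            node = q.popleft()
            current_level.append(node.val)
            if node.left:
                q.append(node.left)
            if node.right:
                q.append(node.right)
        ans.append(current_level)
    return ans
```

Snapshotting `len(q)` before the inner loop is what separates the levels: only the nodes already enqueued belong to the current depth.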
def make_kk_task(self,**kwargs): <NEW_LINE> <INDENT> from ..utils import KKflow <NEW_LINE> self.kktask = KKflow( dirname = os.path.join(self.dirname, '00-KK'), **kwargs) <NEW_LINE> self.add_task(self.kktask) <NEW_LINE> kwargs.update( tetrahedra_fname = self.kktask.tetrahedra_fname, symmetries_fname = self.kktask.symmet... | Run KK flow.
Initialize parameters for tetrahedron integration. | 625941b25fdd1c0f98dbffc4 |
def compile(self, tmp_dir=None, verbose=False): <NEW_LINE> <INDENT> base_prefix = self.prefix <NEW_LINE> if tmp_dir is None: <NEW_LINE> <INDENT> codedir = tempfile.mkdtemp(".pydy_compile") <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> codedir = os.path.abspath(tmp_dir) <NEW_LINE> <DEDENT> if not os.path.exists(codedir)... | Returns a function which evaluates the matrices.
Parameters
==========
tmp_dir : string
The path to an existing or non-existing directory where all of
the generated files will be stored.
verbose : boolean
If true the output of the completed compilation steps will be
printed. | 625941b2d18da76e2353225b |
def _make_bowtie_index(self, fullLengthSeqs): <NEW_LINE> <INDENT> tmp = tempfile.NamedTemporaryFile(delete=False) <NEW_LINE> for name, seq in fullLengthSeqs.items(): <NEW_LINE> <INDENT> tmp.write('>%s\n%s\n' % (name, seq)) <NEW_LINE> <DEDENT> tmp.close() <NEW_LINE> bi = amplishot.app.bowtie.Bowtie2Build(WorkingDir=self... | write the full-length sequences to file then index them with
bowtie, after which the tmpfile can be deleted. | 625941b2a17c0f6771cbdde8 |
def rearm_idle(self, *largs): <NEW_LINE> <INDENT> if not hasattr(self, "idle_timer"): <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> if self.idle_timer is None: <NEW_LINE> <INDENT> self.dispatch("on_wakeup") <NEW_LINE> <DEDENT> self.idle_timer = monotonic() | Rearm the idle timer | 625941b2462c4b4f79d1d45b |
def test_thumb_object(): <NEW_LINE> <INDENT> path = os.path.join(test_location, "armel", "i2c_api.o") <NEW_LINE> l = cle.Loader(path, rebase_granularity=0x1000) <NEW_LINE> for r in l.main_object.relocs: <NEW_LINE> <INDENT> if r.__class__ == cle.backends.elf.relocation.arm.R_ARM_THM_JUMP24: <NEW_LINE> <INDENT> if r.symb... | Test for an object file I ripped out of an ARM firmware HAL.
Uses some nasty relocs
:return: | 625941b24a966d76dd550d9a |
def createDatabaseConnection(dbFile=config['SQLITE']['DATABASE']): <NEW_LINE> <INDENT> conn = None <NEW_LINE> try: <NEW_LINE> <INDENT> conn = sqlite3.connect(dbFile, timeout=40, check_same_thread=False) <NEW_LINE> conn.execute('pragma journal_mode=wal') <NEW_LINE> return conn <NEW_LINE> <DEDENT> except Error as e: <NEW... | create a database connection to the SQLite database specified by db_file
:param db_file: database file
:return: Connection object or None | 625941b21d351010ab8558b1 |
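A self-contained version of the connection helper above; since the row is truncated, the `config`-based default is replaced by an explicit argument and the bare `Error` is qualified as `sqlite3.Error` (both assumptions):

```python
import sqlite3

def create_database_connection(db_file=":memory:"):
    """Open a SQLite connection in WAL journal mode, or return None on failure."""
    try:
        conn = sqlite3.connect(db_file, timeout=40, check_same_thread=False)
        # Write-ahead logging lets readers proceed concurrently with a writer.
        conn.execute("pragma journal_mode=wal")
        return conn
    except sqlite3.Error:
        return None
```

Note that an in-memory database silently ignores the WAL request (the pragma reports `memory`), so the pragma only takes effect for on-disk files.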
def __init__(self, throttling=None, local_vars_configuration=None): <NEW_LINE> <INDENT> if local_vars_configuration is None: <NEW_LINE> <INDENT> local_vars_configuration = Configuration() <NEW_LINE> <DEDENT> self.local_vars_configuration = local_vars_configuration <NEW_LINE> self._throttling = None <NEW_LINE> self.disc... | InlineResponse20011 - a model defined in OpenAPI | 625941b20a366e3fb873e5a2 |
def array_string_csv_null( self, array_query=None, custom_headers={}, raw=False, **operation_config): <NEW_LINE> <INDENT> url = '/queries/array/csv/string/null' <NEW_LINE> query_parameters = {} <NEW_LINE> if array_query is not None: <NEW_LINE> <INDENT> query_parameters['arrayQuery'] = self._serialize.query("array_query... | Get a null array of string using the csv-array format
:param array_query: a null array of string using the csv-array format
:type array_query: list of str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param opera... | 625941b2097d151d1a222bef |
def __init__(self, docs): <NEW_LINE> <INDENT> self.docs_ = docs <NEW_LINE> self.id2word = {} <NEW_LINE> self.word2id = {} <NEW_LINE> self.vocab_ = set() <NEW_LINE> self.__build_vocab() <NEW_LINE> self.__build_id2word() | Initialize Vocabulary data structure with a collection of documents
:param docs: Documents | 625941b28e7ae83300e4ad5f |
def __create_lnk(self, lnk_config, output_file): <NEW_LINE> <INDENT> arguments = [] <NEW_LINE> arguments.append(self.wine) <NEW_LINE> arguments.append(self.mklnk) <NEW_LINE> arguments.append("-t") <NEW_LINE> arguments.append(lnk_config["target_path"]) <NEW_LINE> arguments.append("-o") <NEW_LINE> arguments.append(output... | Creates a .lnk file based on the given config.
Parameters
----------
link_config: dict
The .lnk attributes configuration
output_file: str
The output file to be created
Returns
----------
sub_output: str
The subprocess output | 625941b24527f215b584c1f0 |
def process_should_be_running(self, handle=None, error_message='Process is not running.'): <NEW_LINE> <INDENT> if not self.is_process_running(handle): <NEW_LINE> <INDENT> raise AssertionError(error_message) | Verifies that the process is running.
If `handle` is not given, uses the current `active process`.
Fails if the process has stopped. | 625941b26fece00bbac2d4c6 |
def nameGenerator(self) -> 'UniqueNameGenerator': <NEW_LINE> <INDENT> return typing.cast('UniqueNameGenerator', self.idGenerators('name')) | Utility method to access provided names generator (inside environment)
Returns the environment unique name generator | 625941b2be7bc26dc91cd39a |
def hamming_distance(pt1, pt2): <NEW_LINE> <INDENT> distance = 0 <NEW_LINE> for i in range(len(pt1)): <NEW_LINE> <INDENT> if pt1[i] != pt2[i]: <NEW_LINE> <INDENT> distance += 1 <NEW_LINE> <DEDENT> <DEDENT> print(f'Hamming Distance for {pt1}, {pt2}:\n\t{distance}') <NEW_LINE> return distance | Calculate the distance between two points using the
Hamming Distance.
Scipy method: scipy.spatial.distance.hamming
Note: the Scipy method returns a float between 0
and 1, as a result of dividing the distance
by the number of dimensions of the lists.
For each dimension that does not have the same value
in both poin... | 625941b2e5267d203edcda2f |
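De-tokenized, the Hamming distance above reduces to counting mismatched positions; the normalized variant mirrors what the docstring says about `scipy.spatial.distance.hamming` returning a fraction in [0, 1]:

```python
def hamming_distance(pt1, pt2):
    """Number of positions at which two equal-length sequences differ."""
    if len(pt1) != len(pt2):
        raise ValueError("points must have the same number of dimensions")
    return sum(a != b for a, b in zip(pt1, pt2))

def normalized_hamming(pt1, pt2):
    """Fraction of differing positions, as scipy reports it."""
    return hamming_distance(pt1, pt2) / len(pt1)
```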
def project_plans(self, project_key): <NEW_LINE> <INDENT> resource = 'project/{}'.format(project_key, max_results=25) <NEW_LINE> return self.base_list_call(resource, expand='plans', favourite=False, clover_enabled=False, max_results=25, elements_key='plans', element_key='plan') | Returns a generator with the plans in a given project
:param project_key: Project key
:return: Generator with plans | 625941b245492302aab5e04b |
def getClassTypeId(): <NEW_LINE> <INDENT> return _simvoleon.SoVolumeTriangleStripSet_getClassTypeId() | getClassTypeId() -> SoType | 625941b20383005118ecf371 |
def test_add_translation(self): <NEW_LINE> <INDENT> obj = self.get() <NEW_LINE> resp = obj.add_translation("nl_BE") <NEW_LINE> self.assertEqual(resp["data"]["id"], 827) <NEW_LINE> self.assertEqual( resp["data"]["revision"], "da6ea2777f61fbe1d2a207ff6ebdadfa15f26d1a" ) | Perform verification that the correct endpoint is accessed. | 625941b2956e5f7376d70c09 |
def ubrmse(actual: np.ndarray, predicted: np.ndarray): <NEW_LINE> <INDENT> return np.sqrt(np.nansum((actual-(predicted-np.nanmean(actual)))**2)/len(actual)) | unbiased Root Mean Squared Error | 625941b2f548e778e58cd30e |
def __init__(self, *args, **kwargs): <NEW_LINE> <INDENT> if args and isinstance(args[0], list): <NEW_LINE> <INDENT> self.polynom = args[0] <NEW_LINE> <DEDENT> elif args: <NEW_LINE> <INDENT> self.polynom = args <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self.polynom = [kwargs.get(x, 0) for x in ['x' + str(i) for i in... | Polynom initialization with args kwargs and args[0]=List | 625941b215fb5d323cde089b |
def solveSudoku( board): <NEW_LINE> <INDENT> usenum = ['1', '2', '3', '4', '5', '6', '7', '8', '9'] <NEW_LINE> def getnumlist(i, j, board): <NEW_LINE> <INDENT> used = dict() <NEW_LINE> for k in range(9): <NEW_LINE> <INDENT> if board[k][j] != '.': <NEW_LINE> <INDENT> if board[k][j] not in used: <NEW_LINE> <INDENT> used[... | :type board: List[List[str]]
:rtype: None Do not return anything, modify board in-place instead. | 625941b28c3a87329515814a |
def load_data(train_path, valid_path, test_path): <NEW_LINE> <INDENT> train = pd.read_csv(train_path, header=None) <NEW_LINE> valid = pd.read_csv(valid_path, header=None) <NEW_LINE> test = pd.read_csv( test_path, header=None) <NEW_LINE> train_filenames = train[0] <NEW_LINE> train_labels = train[1] <NEW_LINE> valid_... | Returns the ILSVRC dataset as (train_x, train_y), (test_x, test_y). | 625941b28a349b6b435e7f09 |
def diff(self, n, axis=0): <NEW_LINE> <INDENT> if axis == 0: <NEW_LINE> <INDENT> raise NotImplementedError <NEW_LINE> <DEDENT> new_values = (self.values - self.shift(n, axis=axis)[0].values).asi8 <NEW_LINE> new_values = new_values.reshape(1, len(new_values)) <NEW_LINE> new_values = new_values.astype('timedelta64[ns]') ... | 1st discrete difference
Parameters
----------
n : int, number of periods to diff
axis : int, axis to diff upon. default 0
Return
------
A list with a new TimeDeltaBlock.
Note
----
The arguments here are mimicking shift so they are called correctly
by apply. | 625941b2796e427e537b034d |
def test_another_nested_proxy_field_model_serializer_depth(self): <NEW_LINE> <INDENT> self._nested_proxy_field_model_serializer_depth( self.proxy_author_listing_url ) | Test NestedProxyField and ModelSerializer with more depth. | 625941b27c178a314d6ef1e4 |
def test_get_readable_by_date_expired_key(self): <NEW_LINE> <INDENT> expKey = Key.objects.get(key="exp_key") <NEW_LINE> try: <NEW_LINE> <INDENT> DataStream.objects.get_readable_by_key(expKey) <NEW_LINE> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> self.assertEqual(str(e), "None is not a valid key.") | Test that a Key which has an expired date raises an exception when using DataStream's get_readable_by_key | 625941b299fddb7c1c9de127 |
def test_append_paths(self): <NEW_LINE> <INDENT> if self.MODULE_GENERATOR_CLASS == ModuleGeneratorTcl: <NEW_LINE> <INDENT> expected = ''.join([ "append-path\tkey\t\t$root/path1\n", "append-path\tkey\t\t$root/path2\n", "append-path\tkey\t\t$root\n", ]) <NEW_LINE> paths = ['path1', 'path2', ''] <NEW_LINE> self.assertEqua... | Test generating append-paths statements. | 625941b2de87d2750b85fb19 |
def main(argv): <NEW_LINE> <INDENT> usage = 'Usage: %prog --out=merged_csv_file input_csv_files...' <NEW_LINE> parser = optparse.OptionParser(usage=usage) <NEW_LINE> parser.add_option('--out', dest='outpath', type='string', action='store', default=None, help='File to write merged results to') <NEW_LINE> (options, args)... | Main function. | 625941b230bbd722463cbb56 |
def get_flavor(self, name_or_id, filters=None, get_extra=True): <NEW_LINE> <INDENT> search_func = functools.partial( self.search_flavors, get_extra=get_extra) <NEW_LINE> return _utils._get_entity(self, search_func, name_or_id, filters) | Get a flavor by name or ID.
:param name_or_id: Name or ID of the flavor.
:param filters:
A dictionary of meta data to use for further filtering. Elements
of this dictionary may, themselves, be dictionaries. Example::
{
'last_name': 'Smith',
'other': {
'gender': 'Femal... | 625941b2b5575c28eb68dd8a |
def harvesine_distance(loc1, loc2): <NEW_LINE> <INDENT> lat1, long1 = loc1 <NEW_LINE> lat2, long2 = loc2 <NEW_LINE> lat1 = lat1*math.pi/180 <NEW_LINE> lat2 = lat2*math.pi/180 <NEW_LINE> long1 = long1*math.pi/180 <NEW_LINE> long2 = long2*math.pi/180 <NEW_LINE> dlat = (lat2-lat1) <NEW_LINE> dlong = (long2-long1) <NEW_LIN... | input: locations as lat, long
output: harvesine_distance | 625941b2507cdc57c6306a5f |
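The row above (its formula continues past the truncation) computes the haversine great-circle distance; a complete sketch, assuming a mean Earth radius of 6371 km and distances reported in kilometers:

```python
import math

def haversine_distance(loc1, loc2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, long) pairs given in degrees."""
    lat1, long1 = (math.radians(v) for v in loc1)
    lat2, long2 = (math.radians(v) for v in loc2)
    dlat = lat2 - lat1
    dlong = long2 - long1
    # Haversine formula: a is the square of half the chord length.
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlong / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))
```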
def p_LValue_lvalue_ID( p ): <NEW_LINE> <INDENT> p[0] = PT_LValue_Period_Id( p[1], p[3]) | lvalue : lvalue PERIOD IDENTIFIER | 625941b28a43f66fc4b53dff |
def _add_fields(self): <NEW_LINE> <INDENT> for name, kind in ((str(self.elevation_attribute), ogr.OFTReal), (str(self.feature_id_attribute), ogr.OFTInteger)): <NEW_LINE> <INDENT> definition = ogr.FieldDefn(name, kind) <NEW_LINE> self.layer.CreateField(definition) | Create extra fields. | 625941b23346ee7daa2b2af5 |
def create_app(): <NEW_LINE> <INDENT> app = jobmonitor.create_app() <NEW_LINE> app.config.from_object('monitoring_app.config') <NEW_LINE> if not app.debug: <NEW_LINE> <INDENT> add_logging(app) <NEW_LINE> <DEDENT> example = Blueprint('example', __name__, template_folder='templates', static_folder='static', static_url_pa... | Create a Flask application deriving from jobmonitor. | 625941b232920d7e50b27f61 |
def SetExtractionRegion(self, *args): <NEW_LINE> <INDENT> return _itkExtractImageFilterPython.itkExtractImageFilterIVF33IVF33_SetExtractionRegion(self, *args) | SetExtractionRegion(self, itkImageRegion3 extractRegion) | 625941b21f5feb6acb0c48ea |
def getMenuNamed(self, menuName): <NEW_LINE> <INDENT> if self.a11yAppName is None: <NEW_LINE> <INDENT> self.a11yAppName = self.internCommand <NEW_LINE> <DEDENT> app = root <NEW_LINE> apps = root.applications() <NEW_LINE> for i in apps: <NEW_LINE> <INDENT> if i.name.lower() == self.a11yAppName: <NEW_LINE> <INDENT> app =... | Return submenu with name specified with 'menuName' | 625941b2bde94217f3682b8c |
def test_ack(self): <NEW_LINE> <INDENT> namespace = self.socketIO.define(Namespace) <NEW_LINE> self.socketIO.emit( 'trigger_server_expects_callback', deepcopy(PAYLOAD)) <NEW_LINE> self.socketIO.wait(self.wait_time_in_seconds) <NEW_LINE> self.assertEqual(namespace.args_by_event, { 'server_expects_callback': (PAYLOAD,), ... | Respond to a server callback request | 625941b23539df3088e2e0d9 |
def cached_url(url): <NEW_LINE> <INDENT> folder = 'cached' <NEW_LINE> filename = url.rsplit('/')[-2] + '.html' <NEW_LINE> path = os.path.join(folder, filename) <NEW_LINE> if os.path.exists(path): <NEW_LINE> <INDENT> with open(path, 'rb') as f: <NEW_LINE> <INDENT> return f.read() <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_... | Cache pages locally to avoid repeated downloads
:param url:
:return: | 625941b2e64d504609d745d3 |
def p_BoolExpr_EQ(p): <NEW_LINE> <INDENT> p[0] = BinaryBoolNode(p[1], p[2], p[3]) <NEW_LINE> p[0].pos_info = getPosition(p, 0) | BoolExpr : Expr EQ Expr | 625941b2097d151d1a222bf1 |
def display_series(self): <NEW_LINE> <INDENT> data_mgr = DatabaseManager(Config().database_name, None) <NEW_LINE> if self.list_series.currentItem(): <NEW_LINE> <INDENT> series_rowid = self.list_series.currentItem().data(Qt.UserRole) <NEW_LINE> cur = data_mgr.query("SELECT rowid, * FROM Series WHERE rowid = %d" % series... | Retrieves and displays info for selected series.
This function retrieves the unique rowid for the selected
series and retrieves the series from the database. It then
updates all main window elements which show series info to
show up-to-date properties. Once all series information is
properly displayed, buttons which c... | 625941b2e8904600ed9f1cb7 |
def demo(): <NEW_LINE> <INDENT> if BibleOrgSysGlobals.verbosityLevel > 1: print( ProgNameVersion ) <NEW_LINE> if BibleOrgSysGlobals.commandLineOptions.export: <NEW_LINE> <INDENT> bbosc = BibleBookOrdersConverter().loadSystems() <NEW_LINE> bbosc.pickle() <NEW_LINE> bbosc.exportDataToPython() <NEW_LINE> bbosc.exportDataT... | Main program to handle command line parameters and then run what they want. | 625941b224f1403a92600900 |
def hasCycle(self, head): <NEW_LINE> <INDENT> if not head: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> walker = head <NEW_LINE> runner = head <NEW_LINE> while runner.next and runner.next.next: <NEW_LINE> <INDENT> walker = walker.next <NEW_LINE> runner = runner.next.next <NEW_LINE> if walker == runner: <NEW_LIN... | :type head: ListNode
:rtype: bool | 625941b28c0ade5d55d3e74d |
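The `hasCycle` row is Floyd's tortoise-and-hare cycle detection; a runnable version with a minimal `ListNode` (assumed, standing in for the one the original problem provides):

```python
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None

def has_cycle(head):
    """Advance a slow and a fast pointer; they can only meet if the list loops."""
    walker = runner = head
    while runner and runner.next:
        walker = walker.next          # one step
        runner = runner.next.next     # two steps
        if walker is runner:
            return True
    return False
```

The fast pointer gains one node per iteration on the slow one, so inside a cycle the gap shrinks to zero in at most one lap.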
def _AddToNewSizer(self, sizer, props): <NEW_LINE> <INDENT> for child in self.GetChildren(): <NEW_LINE> <INDENT> csp = props.get(child.GetId(), None) <NEW_LINE> if csp is not None: <NEW_LINE> <INDENT> self.GetSizer().Add(child) <NEW_LINE> child.SetSizerProps(csp) | Add children to new sizer.
:param `sizer`: param is not used, remove it ???
:param `props`: sizer properties | 625941b285dfad0860c3abe6 |
def learn( targets, numTrees=10, path="", regression=False, advice=False, softm=False, alpha=0.0, beta=0.0, saveJson=True, ): <NEW_LINE> <INDENT> models = {} <NEW_LINE> for target in targets: <NEW_LINE> <INDENT> trainData = Utils.readTrainingData( target, path=path, regression=regression, advice=advice, softm=softm, al... | .. versionadded:: 0.3.0
Learn a relational dependency network from facts and positive/negative
examples via relational regression trees.
.. note:: This currently requires that training data is stored as files
on disk.
:param targets: List of target predicates to learn models for.
:type targets: list of str... | 625941b2596a897236089859 |
def draw( self, ventana ): <NEW_LINE> <INDENT> ventana.blit( self.image, self.rect ) | Displays the character on screen. | 625941b245492302aab5e04d |
def call(self, context, x, losses=None): <NEW_LINE> <INDENT> memory_antecedent = self._get_memory_antecedent(context) <NEW_LINE> memory_input_dim = memory_antecedent.shape[-1] <NEW_LINE> if memory_input_dim != context.model.model_dim: <NEW_LINE> <INDENT> raise NotImplementedError( "TODO(noam): support different model_d... | Call the layer. | 625941b2a79ad161976cbed4 |
def _get_authorized_password(): <NEW_LINE> <INDENT> return [config.get("secure_uninstall"), config.get("admin_passwd")] | You can define your own authorized keys
| 625941b291af0d3eaac9b7a1 |
def index(request): <NEW_LINE> <INDENT> u = current_user(request) <NEW_LINE> body = template('weibo_index.html') <NEW_LINE> return http_response(body) | Handler for the home page; returns the home-page response | 625941b27c178a314d6ef1e5 |
def jerr(r): <NEW_LINE> <INDENT> rc = r['return'] <NEW_LINE> re = r['error'] <NEW_LINE> out('Error: '+re) <NEW_LINE> raise KeyboardInterrupt | Print error message for CK functions in the Jupyter Notebook and raise KeyboardInterrupt
Target audience: end users
Used in Jupyter Notebook
Example:
import ck.kernel as ck
r=ck.access({'action':'load', 'module_uoa':'tmp', 'data_uoa':'some tmp entry'})
if r['return']>0: ck.jerr(r)
Args: ... | 625941b2d486a94d0b98dede |
def prep_http_method(self, method): <NEW_LINE> <INDENT> method.view_class = self.viewClass <NEW_LINE> method.init_kwargs = self.kwargs <NEW_LINE> return method | To emulate View.as_view() we could do this on EACH http method. Normally as_view is only made with one. | 625941b250485f2cf553cb26 |
def metadata(self, request): <NEW_LINE> <INDENT> metadata = super(AmCATMetadataMixin, self).metadata(request) <NEW_LINE> metadata['label'] = self.get_label() <NEW_LINE> grfm = api.rest.resources.get_resource_for_model <NEW_LINE> metadata['models'] = {name : grfm(field.queryset.model).get_url() for (name, field) in self... | This is used by the OPTIONS request; add models, fields, and label for datatables | 625941b276d4e153a657e8be |
@click.command() <NEW_LINE> @click.argument('term') <NEW_LINE> def run(term): <NEW_LINE> <INDENT> word = TurkishWord(term) <NEW_LINE> word.query() <NEW_LINE> output = word.meaning <NEW_LINE> click.echo(output) | A command line tool to query the meaning of a Turkish word from the official dictionary. | 625941b20fa83653e4656d53 |
def purgeBrackets(string): <NEW_LINE> <INDENT> if not string: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> string = string.replace("<", "").replace(">", "") <NEW_LINE> return string | Get rid of <> around a string | 625941b2dc8b845886cb52c2 |
def test_llist_find_from_positive_non_existent_key(self): <NEW_LINE> <INDENT> elements_list = TestLList.llist_integer.find_from( 21, 2, {'timeout': 1000}) <NEW_LINE> assert elements_list == [56, 122] | Invoke find_from() to access elements from a non-existent key | 625941b2796e427e537b0350 |
def _ascii_to_hex(symbol): <NEW_LINE> <INDENT> return symbol.encode("hex") | encode an ASCII symbol to an hex char | 625941b2f8510a7c17cf9494 |
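`symbol.encode("hex")` in the row above is a Python 2 idiom; under Python 3 the same transformation goes through `bytes.hex()`:

```python
def ascii_to_hex(symbol):
    """Encode an ASCII string as its lowercase hex representation (Python 3)."""
    return symbol.encode("ascii").hex()
```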
@Pipe <NEW_LINE> def _add_move_issue_subparser(subparsers: _SubParsersAction): <NEW_LINE> <INDENT> move_issue_parser = subparsers.add_parser("move", aliases=["mv"], help="Moves an issue from one status to another") <NEW_LINE> move_issue_parser.add_argument("transition", metavar="TRANSITION", help="The transition to per... | Creates a subparser for logging work | 625941b2de87d2750b85fb1b |
def draw_geometry(self, painter): <NEW_LINE> <INDENT> painter.ellipse(self.frame.x1, self.frame.y1, self.frame.x2, self.frame.y2) | Draws the ellipse
:param painter: the drawing tool | 625941b273bcbd0ca4b2be0b |
def _record(self, value, rank, delta, successor): <NEW_LINE> <INDENT> return _Sample(value, rank, delta, successor) | Catalogs a sample. | 625941b27047854f462a11a4 |
def setUp(self): <NEW_LINE> <INDENT> self.register_endpoint = reverse('register') <NEW_LINE> self.login_endpoint = reverse('login') <NEW_LINE> self.comment_reactions_endpoint = reverse('comment_reactions') <NEW_LINE> self.user = { "user": { "username": "username_tu", "email": "user@mymail.com", "password": "#Strong2-pa... | Set up | 625941b2046cf37aa974cada |
def _winrm_connect(self): <NEW_LINE> <INDENT> display.vvv("ESTABLISH WINRM CONNECTION FOR USER: %s on PORT %s TO %s" % (self._winrm_user, self._winrm_port, self._winrm_host), host=self._winrm_host) <NEW_LINE> netloc = '%s:%d' % (self._winrm_host, self._winrm_port) <NEW_LINE> endpoint = urlunsplit((self._winrm_scheme, n... | Establish a WinRM connection over HTTP/HTTPS. | 625941b2e1aae11d1e749a49 |
def add_discovery(self, device, address): <NEW_LINE> <INDENT> self.post('discovery', params={'device': device, 'address': address}) | Add an entry to the discovery cache.
Args:
device (str): Device ID.
address (str): destination address, a valid hostname or
IP address that's serving a Syncthing instance.
Returns:
None | 625941b23346ee7daa2b2af7 |
def prepare_data_directory(): <NEW_LINE> <INDENT> if not os.path.exists(data_directory): <NEW_LINE> <INDENT> os.makedirs(data_directory) | create the target directory | 625941b215baa723493c3d06 |
def reverse_list_in_place(items): <NEW_LINE> <INDENT> swap_number = custom_len(items) // 2 <NEW_LINE> for i in range(swap_number): <NEW_LINE> <INDENT> current_n = items[i] <NEW_LINE> current_neg_n = items[(i + 1) * -1] <NEW_LINE> items[i] = current_neg_n <NEW_LINE> items[(i + 1) * -1] = current... | Reverse the input list `in place`.
Reverse the input list given, but do it "in place" --- that is,
do not create a new list and return it, but modify the original
list.
**Do not use** the python function `reversed()` or the method
`list.reverse()`.
For example::
>>> orig = [1, 2, 3]
>>> reverse_list_in_place(... | 625941b2b5575c28eb68dd8c |
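A compact tuple-swap formulation of the in-place reversal described above (built-in `len` is used here in place of the exercise's `custom_len` helper):

```python
def reverse_list_in_place(items):
    """Reverse `items` in place by swapping symmetric pairs.

    Uses neither reversed() nor list.reverse(); the middle element of an
    odd-length list is left untouched.
    """
    for i in range(len(items) // 2):
        items[i], items[-(i + 1)] = items[-(i + 1)], items[i]
```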
def dice(predictions, labels, num_classes): <NEW_LINE> <INDENT> dice_scores = np.zeros((num_classes)) <NEW_LINE> for i in range(num_classes): <NEW_LINE> <INDENT> tmp_den = (np.sum(predictions == i) + np.sum(labels == i)) <NEW_LINE> tmp_dice = 2. * np.sum((predictions == i) * (labels == i)) / tmp_den if tmp_d... | Calculates the categorical Dice similarity coefficients for each class
between labels and predictions.
Args:
predictions (np.ndarray): predictions
labels (np.ndarray): labels
num_classes (int): number of classes to calculate the dice
coefficient for
Returns:
np.ndarray: dice coefficient pe... | 625941b226068e7796caea6f |
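The per-class Dice coefficient above, de-tokenized; the zero-denominator guard (truncated in the row) is filled in here as an assumption, returning 0 for classes absent from both arrays:

```python
import numpy as np

def dice(predictions, labels, num_classes):
    """Per-class Dice similarity: 2|P ∩ L| / (|P| + |L|), 0 for absent classes."""
    dice_scores = np.zeros(num_classes)
    for i in range(num_classes):
        denom = np.sum(predictions == i) + np.sum(labels == i)
        inter = np.sum((predictions == i) & (labels == i))
        dice_scores[i] = 2.0 * inter / denom if denom > 0 else 0.0
    return dice_scores
```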
def setUp(self): <NEW_LINE> <INDENT> super(AtStyleSchedulerTests, self).setUp() <NEW_LINE> response = self.autoscale_behaviors.create_scaling_group_given( lc_name='at_style_scheduled', gc_cooldown=0) <NEW_LINE> self.group = response.entity <NEW_LINE> self.resources.add(self.group, self.empty_scaling_group) | Create a scaling group with minentities=0 and cooldown=0 | 625941b2fff4ab517eb2f1cf |
def test04_itersorted8(self): <NEW_LINE> <INDENT> table = self.table <NEW_LINE> sortedtable = numpy.sort(table[:], order='icol')[55:33:-5] <NEW_LINE> sortedtable2 = numpy.array( [row.fetch_all_fields() for row in table.itersorted( 'icol', start=55, stop=33, step=-5)], dtype=table._v_dtype) <NEW_LINE> if verbose: <NEW_L... | Testing the Table.itersorted() method with a start, stop and
negative step. | 625941b2a17c0f6771cbddec |
def rgb_to_cmyk(rgb): <NEW_LINE> <INDENT> r = rgb.red / 255.0 <NEW_LINE> g = rgb.green / 255.0 <NEW_LINE> b = rgb.blue / 255.0 <NEW_LINE> c = 1 - r <NEW_LINE> m = 1 - g <NEW_LINE> y = 1 - b <NEW_LINE> if c == 1 and m == 1 and y == 1: <NEW_LINE> <INDENT> return colormodel.CMYK(0.0,0.0,0.0,1.0*100.0) <NEW_LINE> <DEDENT> ... | Returns: color rgb in space CMYK, with the most black possible.
Formulae from en.wikipedia.org/wiki/CMYK_color_model.
Parameter rgb: the color to convert to a CMYK object
Precondition: rgb is an RGB object | 625941b23539df3088e2e0da |
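The row converts RGB to CMY and special-cases black; the full "most black possible" conversion extracts K = min(C, M, Y) and rescales the remaining channels (a sketch over plain ints rather than the `colormodel` objects the row assumes, with CMYK fractions in [0, 1] instead of percentages):

```python
def rgb_to_cmyk(r, g, b):
    """Convert 0-255 RGB to (c, m, y, k) fractions in [0, 1], maximizing black."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)   # pure black: all pigment comes from K
    c = 1 - r / 255.0
    m = 1 - g / 255.0
    y = 1 - b / 255.0
    k = min(c, m, y)  # pull as much black as possible out of all three channels
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)
```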
def getNetworkSwitchRoutingMulticastRendezvousPoint(self, networkId: str, rendezvousPointId: str): <NEW_LINE> <INDENT> metadata = { 'tags': ['switch', 'configure', 'routing', 'multicast', 'rendezvousPoints'], 'operation': 'getNetworkSwitchRoutingMulticastRendezvousPoint' } <NEW_LINE> resource = f'/networks/{networkId}/... | **Return a multicast rendezvous point**
https://developer.cisco.com/meraki/api-v1/#!get-network-switch-routing-multicast-rendezvous-point
- networkId (string): (required)
- rendezvousPointId (string): (required) | 625941b21d351010ab8558b5 |
def getPremiumInfo(self, authenticationToken): <NEW_LINE> <INDENT> self.send_getPremiumInfo(authenticationToken) <NEW_LINE> return self.recv_getPremiumInfo() | Returns information regarding a user's Premium account corresponding to the
provided authentication token, or throws an exception if this token is not
valid.
Parameters:
- authenticationToken | 625941b2ec188e330fd5a540 |
def reset(self): <NEW_LINE> <INDENT> self.clear_cache() <NEW_LINE> self.clear_data() <NEW_LINE> self.clear_settings() | Delete workflow settings, cache and data.
File :attr:`settings <settings_path>` and directories
:attr:`cache <cachedir>` and :attr:`data <datadir>` are deleted. | 625941b28e05c05ec3eea107 |
def __repr__(self): <NEW_LINE> <INDENT> return u'{}(return_code={!r}, stderr={!r}, msg={!r}'.format( type(self).__name__, self.return_code, self.stderr, self.msg ) | Include class name return_code, stderr and msg to improve logging
| 625941b2e5267d203edcda31 |
@pytest.fixture( params = get_test_data_list()) <NEW_LINE> def get_my_test_data(request): <NEW_LINE> <INDENT> return request.param | A custom fixture; by using it, pytest executes the test method once for each element of the list `params`.
:param request: A request for a fixture from a test or fixture function. A request object gives access to the requesting test context and has an optional param attribute in case the fixture is parametrized in... | 625941b26aa9bd52df036b31 |
def slidingPuzzle(self, board): <NEW_LINE> <INDENT> target = '123450' <NEW_LINE> start = ''.join(str(i) for tiles in board for i in tiles) <NEW_LINE> moves = [[1,3],[0,2,4],[1,5],[0,4],[1,3,5],[2,4]] <NEW_LINE> current_level, next_level = [start], [] <NEW_LINE> result = 0 <NEW_LINE> visited = set() <NEW_LINE> while cur... | :type board: List[List[int]]
:rtype: int | 625941b20a50d4780f666c1e |
def flip_query_coords(self, n): <NEW_LINE> <INDENT> qs = self.qs <NEW_LINE> self.qs = n - self.qe <NEW_LINE> self.qe = n - qs | Flip the coordinates with respect to the query with n fragments | 625941b24527f215b584c1f4 |
def onOK(self): <NEW_LINE> <INDENT> fld = self.getField(True) <NEW_LINE> if fld.name == "": <NEW_LINE> <INDENT> QMessageBox.critical(self, self.tr("DB Manager"), self.tr("field name must not be empty")) <NEW_LINE> return <NEW_LINE> <DEDENT> if fld.dataType == "": <NEW_LINE> <INDENT> QMessageBox.critical(self, self.tr("... | first check whether everything's fine | 625941b2046cf37aa974cadb |
def verify_profile_pin(guid): <NEW_LINE> <INDENT> if not g.LOCAL_DB.get_profile_config('isPinLocked', False, guid=guid): <NEW_LINE> <INDENT> return True <NEW_LINE> <DEDENT> pin = ask_for_pin(common.get_local_string(30006)) <NEW_LINE> return None if not pin else verify_profile_lock(guid, pin) | Verify if the profile is locked by a PIN and ask the PIN | 625941b2460517430c393f22 |
def _parse_area_source(element): <NEW_LINE> <INDENT> ID, name, tect_reg = _get_id_name_tect_reg(element) <NEW_LINE> polygon = _get_polygon(element) <NEW_LINE> mfd = _get_mfd(element) <NEW_LINE> return AreaSourceNRML04(polygon, mfd) | Parse NRML 0.4 area source element. | 625941b2956e5f7376d70c0d |
def rand_points_with_push(n, box, sep): <NEW_LINE> <INDENT> x_coord = np.random.randint(box[0], box[1], (1, n)).astype(float) <NEW_LINE> y_coord = np.random.randint(box[2], box[3], (1, n)).astype(float) <NEW_LINE> return _push_points(x_coord, y_coord, box, sep) | Generate a set of n random points within a box.
Box should be a tuple containing the minimum and maximum co-ordinates
desired in the form (xmin, xmax, ymin, ymax) | 625941b25f7d997b8717482b |
def menu(runtime): <NEW_LINE> <INDENT> page_refresh() <NEW_LINE> sleep(1) <NEW_LINE> print('1) Ping a single host for the default time.') <NEW_LINE> print('2) Ping a number of hosts for the default time.') <NEW_LINE> print('3) Ping each host in a path for the default time.') <NEW_LINE> print('4) Change the default time... | Menu system for user interaction. | 625941b250485f2cf553cb28 |
def handle_add_permissions_to_key(request, service): <NEW_LINE> <INDENT> service_name = request.get('name') <NEW_LINE> group_name = request.get('group') <NEW_LINE> group_namespace = request.get('group-namespace') <NEW_LINE> if group_namespace: <NEW_LINE> <INDENT> group_name = "{}-{}".format(group_namespace, group_name)... | Groups are defined by the key cephx.groups.(namespace-)?-(name). This key
will contain a dict serialized to JSON with data about the group, including
pools and members.
A group can optionally have a namespace defined that will be used to
further restrict pool access. | 625941b2796e427e537b0352 |
def nodes_status(self, client: str = "", node: int = None): <NEW_LINE> <INDENT> client_plugin = self._select_client(instance_id=client).plugin <NEW_LINE> nodes = client_plugin.nodes() <NEW_LINE> if node is None: <NEW_LINE> <INDENT> return cli_output(list(node.status.to_dict() for node in nodes)) <NEW_LINE> <DEDENT> ret... | Get info about a client plugin. | 625941b27c178a314d6ef1e8 |
def check_best_score(stats, sb): <NEW_LINE> <INDENT> if stats.best_score > stats.step: <NEW_LINE> <INDENT> stats.best_score = stats.step <NEW_LINE> sb.prep_best_score() | Check whether a new best score has been achieved | 625941b23317a56b869399fc |
def _add_defaults_optional(self): <NEW_LINE> <INDENT> pass | We don't want any of the default optional files.
This is setup.cfg, pyproject.toml, and test/test*.py, all from the
toplevel, which we don't want. | 625941b282261d6c526ab234 |
def test_contains(self): <NEW_LINE> <INDENT> t = StringTrie() <NEW_LINE> test_keys = [u'', u'f', u'foo', u'foobar', u'baz'] <NEW_LINE> for key in test_keys: <NEW_LINE> <INDENT> t[key] = key <NEW_LINE> <DEDENT> for key in test_keys: <NEW_LINE> <INDENT> self.assertTrue(key in t) <NEW_LINE> <DEDENT> for key in [u'x', u'fb... | Test the contains operator. | 625941b260cbc95b062c62d9 |
def new(self, notebody='', category=''): <NEW_LINE> <INDENT> cid = self.find_category(name=category) <NEW_LINE> if category and not cid: <NEW_LINE> <INDENT> cid = str(uuid.uuid4()) <NEW_LINE> self.categories[cid]={'name':category} <NEW_LINE> <DEDENT> note = Note(gui_class=self.gui_class, noteset=self, category=cid) <NE... | Creates a new note and adds it to the note set | 625941b29b70327d1c4e0b66 |
def _extract_raw_predictions(self, predictions: Optional[pd.DataFrame] = None) -> pd.DataFrame: <NEW_LINE> <INDENT> if predictions is None: <NEW_LINE> <INDENT> predictions = self.predict <NEW_LINE> <DEDENT> df = predictions.merge(self.avgint, on=['avgint_id']) <NEW_LINE> df = df.merge(self.integrand, on=['integrand_id'... | Grab raw predictions from the predict table.
Or, optionally merge some predictions on the avgint table and integrand table. This
is a work-around when we've wanted to use a different prediction data frame (from using
multithreading) because dismod_at does not allow you to set the predict table. | 625941b28c3a87329515814e |
def get_info(): <NEW_LINE> <INDENT> temp_info = {} <NEW_LINE> with open('info.txt','r',encoding='utf-8') as f: <NEW_LINE> <INDENT> line = f.readline() <NEW_LINE> while line: <NEW_LINE> <INDENT> info = line.rstrip().split(' ') <NEW_LINE> temp_info[info[0]] = int(info[1]) <NEW_LINE> line = f.readline() <NEW_LINE> <DEDENT... | Get employee data
:return: dictionary of employee data | 625941b2091ae35668666cfa |
def target_update(self): <NEW_LINE> <INDENT> self.target_net.load_state_dict(self.policy_net.state_dict()) | Update the target network
run this method regularly and makes learning stable. | 625941b22ae34c7f2600cec8 |
def output_sdss_dir(catl_kind='data', catl_type='mr', sample_s='19', Program_Msg=fd.Program_Msg(__file__)): <NEW_LINE> <INDENT> outdir = gp.get_plot_path()+'SDSS/'+catl_kind+'/'+catl_type+'/' <NEW_LINE> outdir += 'Mr'+sample_s <NEW_LINE> fd.Path_Folder(outdir) <NEW_LINE> print('{0} `outdir`: {1}'.format(Program_Msg,... | Output for sdss directorry, either for `data` or `mocks`
Parameters
----------
catl_kind: string, optional (default = 'data')
type of catalogue to use
Options:
- 'data': catalogues comes from SDSS 'real' catalog
- 'mocks': catalogue(s) come from SDSS 'mock' catalogues
catl_type: string, option... | 625941b23346ee7daa2b2af8 |
def generateBoogieVamp(blRealization=None, numRepeats=5): <NEW_LINE> <INDENT> from music21 import converter <NEW_LINE> from music21 import stream <NEW_LINE> from music21 import interval <NEW_LINE> if blRealization is None: <NEW_LINE> <INDENT> bluesLine = twelveBarBlues() <NEW_LINE> fbRules = rules.Rules() <NEW_LINE> fb... | Turns whole notes in twelve bar blues bass line to blues boogie woogie bass line. Takes
in numRepeats, which is the number of times to repeat the bass line. Also, takes in a
realization of :meth:`~music21.figuredBass.examples.twelveBarBlues`. If none is provided,
a default realization with :attr:`~music21.figuredBass.r... | 625941b2de87d2750b85fb1e |
def add_timeseries(self, in_epoch_name, timeseries): <NEW_LINE> <INDENT> epoch_ts = {} <NEW_LINE> if isinstance(timeseries, nwbts.TimeSeries): <NEW_LINE> <INDENT> timeseries_path = timeseries.full_path() <NEW_LINE> <DEDENT> elif isinstance(timeseries, str): <NEW_LINE> <INDENT> timeseries_path = timeseries <NEW_LINE> <D... | Associates time series with epoch. This will create a link
to the specified time series within the epoch and will
calculate its overlaps.
Arguments:
*in_epoch_name* (text) Name that time series will use
in the epoch (this can be different than the actual
time series name)
*timeseries* (text or TimeS... | 625941b2925a0f43d2549c03 |
def simon(message, **kwargs): <NEW_LINE> <INDENT> warnings.warn("SIMON says: {0}".format(message), **kwargs) | The Statistical Interpretation MONitor.
A warning system designed to always remind the user that Simon
is watching him/her.
Parameters
----------
message : string
The message that is thrown
kwargs : dict
The rest of the arguments that are passed to ``warnings.warn`` | 625941b21d351010ab8558b7 |
def p_paren(p): <NEW_LINE> <INDENT> p[0] = p[2] | condition : LPAREN condition RPAREN | 625941b2462c4b4f79d1d461 |
def run_test(self): <NEW_LINE> <INDENT> self.log.info("Compare responses from gewalletinfo RPC and `educacoin-cli getwalletinfo`") <NEW_LINE> cli_response = self.nodes[0].cli.getwalletinfo() <NEW_LINE> rpc_response = self.nodes[0].getwalletinfo() <NEW_LINE> assert_equal(cli_response, rpc_response) <NEW_LINE> self.log.i... | Main test logic | 625941b207d97122c417861b |