"Parameter '@i' specified but none of the passed arguments have a property with this name"

This post describes a user-variable problem encountered when using MySQL with PetaPoco, and how to solve it: enable `AllowUserVariables=true` in the database connection string and change each single @ in the SQL to a double @ (@@). This fixes the incorrect counter variable and lets the query execute correctly.

The user-variable problem hit with MySQL + PetaPoco is solved in two steps:

1. Allow user variables in the database connection string:

<add name="xxxx" connectionString="Server=xxxx;port=1234;Database=xxxx;uid=xxxx;pwd=xxxx;AllowUserVariables=true;Charset=utf8mb4;" providerName="MySql.Data.MySqlClient" />

2. Replace each single @ with two @s (PetaPoco passes @@ through as a literal @ instead of treating it as a named parameter), as shown below:

Select (@i:=@i+1) as 'MySort' ....  changes to:  Select (@@i:=@@i+1) as 'MySort',
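For reference, the same counter pattern can be exercised outside PetaPoco. The sketch below is a minimal illustration using mysql-connector-python; the connection details and table name are placeholders, and the counter is initialized explicitly with SET @i := 0 so numbering starts at 1.

import mysql.connector  # assumption: any DB-API driver works the same way

# Placeholder connection details.
conn = mysql.connector.connect(host="localhost", user="xxxx", password="xxxx", database="xxxx")
cur = conn.cursor()

# User variables are session-scoped, so initialize the counter first.
cur.execute("SET @i := 0")

# Hypothetical table; the counter increments once per returned row.
cur.execute("SELECT (@i := @i + 1) AS MySort, t.* FROM some_table t")
for row in cur.fetchall():
    print(row)

conn.close()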

bartc {bartCause}    R Documentation

Causal Inference using Bayesian Additive Regression Trees

Description

Fits a collection of treatment and response models using the Bayesian Additive Regression Trees (BART) algorithm, producing estimates of treatment effects.

Usage

bartc(response, treatment, confounders, parametric, data, subset, weights,
      method.rsp = c("bart", "tmle", "p.weight"),
      method.trt = c("bart", "glm", "none"),
      estimand = c("ate", "att", "atc"),
      group.by = NULL,
      commonSup.rule = c("none", "sd", "chisq"),
      commonSup.cut = c(NA_real_, 1, 0.05),
      args.rsp = list(), args.trt = list(),
      p.scoreAsCovariate = TRUE, use.ranef = TRUE, group.effects = FALSE,
      crossvalidate = FALSE, keepCall = TRUE, verbose = TRUE,
      seed = NA_integer_, ...)

Arguments

response: A vector of the outcome variable, or a reference to such in the data argument. Can be continuous or binary.

treatment: A vector of the binary treatment variable, or a reference to data.

confounders: A matrix or data frame of covariates to be used in estimating the treatment and response model. Can also be the right-hand side of a formula (e.g. x1 + x2 + ...). The data argument will be searched if supplied.

parametric: The right-hand side of a formula (e.g. x1 + x2 + (1 | g) ...) giving the equation of a parametric form to be used for estimating the mean structure. See the details section below.

data: An optional data frame or named list containing the response, treatment, and confounders.

subset: An optional vector used to subset the data. Can refer to data if provided.

weights: An optional vector of population weights used in model fitting and estimating the treatment effect. Can refer to data if provided.

method.rsp: A character string specifying which method to use when fitting the response surface and estimating the treatment effect. Options are: "bart" - fit the response surface with BART and take the average of the individual treatment effect estimates; "p.weight" - fit the response surface with BART but compute the treatment effect estimate by using a propensity score weighted sum of individual effects; and "tmle" - as above, but further adjust the individual estimates using the Targeted Minimum Loss based Estimation (TMLE) adjustment.

method.trt: A character string specifying which method to use when fitting the treatment assignment mechanism, or a vector/matrix of propensity scores. Character string options are: "bart" - fit BART directly to the treatment variable; "glm" - fit a generalized linear model with a binomial response and all confounders added linearly; and "none" - do no propensity score estimation. Cannot be "none" if the response model requires propensity scores. When supplied as a matrix, it should be of dimensions equal to the number of observations times the number of samples used in any response model.

estimand: A character string specifying which causal effect to target. Options are "ate" - average treatment effect, "att" - average treatment effect on the treated, and "atc" - average treatment effect on the controls.

group.by: An optional factor that, when present, causes the treatment effect estimate to be calculated within each group.

commonSup.rule: Rule for exclusion of observations lacking in common support. Options are: "none" - no suppression; "sd" - exclude units whose predicted counterfactual standard deviation is extreme compared to the maximum standard deviation under those units' observed treatment condition, where extreme refers to the distribution of all standard deviations of observed treatment conditions; "chisq" - exclude observations according to the ratio of the variance of the posterior predicted counterfactual to the posterior variance of the observed condition, which has a chi-squared distribution with one degree of freedom under the null hypothesis of equal distributions.

commonSup.cut: Cutoffs for commonSup.rule. Ignored for "none". When commonSup.rule is "sd", refers to how many standard deviations of the distribution of posterior variance for counterfactuals an observation can be above the maximum of posterior variances for that treatment condition. When commonSup.rule is "chisq", it is the p value used for rejection of the hypothesis of equal variances.

p.scoreAsCovariate: A logical such that when TRUE, the propensity score is added to the response model as a covariate. When used, this is equivalent to the 'ps-BART' method described by Hahn, Murray, and Carvalho.

use.ranef: Logical specifying whether the group.by variable - when present - should be included as a "random" or "fixed" effect. If TRUE, rbart will be used for BART models. Using random effects for treatment assignment mechanisms of type "glm" requires that the lme4 package be available.

group.effects: Logical specifying whether effects should be calculated within groups if the group.by variable is provided. Response methods of "tmle" and "p.weight" are such that if group effects are calculated, then the population effect is not provided.

keepCall: A logical such that when FALSE, the call to bartc is not kept. This can reduce the amount of information printed by summary when passing in data as literals.

crossvalidate: One of TRUE, FALSE, "trt", or "rsp". Enables code to attempt to estimate the optimal end-node sensitivity parameter. This uses a rudimentary Bayesian optimization routine and can be extremely slow.

verbose: A logical that when TRUE prints information as the model is fit.

seed: Optional integer specifying the desired pRNG seed. It should not be needed when running single-threaded - set.seed will suffice - and can be used to obtain reproducible results when multi-threaded. See the Reproducibility section of bart2.

args.rsp, args.trt, ...: Further arguments to the treatment and response model fitting algorithms. Arguments passed to the main function as ... will be used in both models. args.rsp and args.trt can be used to set parameters in a single fit, and will override other values. See glm and bart2 for reference.

Details

bartc represents a collection of methods that primarily use the Bayesian Additive Regression Trees (BART) algorithm to estimate causal treatment effects with binary treatment variables and continuous or binary outcomes. This requires models to be fit to the response surface (the distribution of the response as a function of treatment and confounders, p(Y(1), Y(0) | X)) and optionally to the treatment assignment mechanism (the probability of receiving treatment, i.e. the propensity score, Pr(Z = 1 | X)). The response surface model is used to impute counterfactuals, which may then be adjusted together with the propensity score to produce estimates of effects. Similar to lm, models can be specified symbolically.
When the data term is present, it will be added to the search path for the response, treatment, and confounders variables. The confounders must be specified devoid of any "left hand side", as they appear in both of the models.
07-21
Parameters for big model inference

torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model under a specific dtype. The options are: torch.float16, torch.bfloat16, or torch.float - load in the specified dtype, ignoring the model's config.torch_dtype if one exists (if not specified, the model is loaded in torch.float, i.e. fp32); "auto" - the torch_dtype entry in the model's config.json is tried first, and if that entry is not found, the dtype of the first floating-point weight in the checkpoint is used instead. This loads the model using the dtype it was saved in at the end of training; it cannot be used as an indicator of how the model was trained, since a model may be trained in one of the half-precision dtypes but saved in fp32. A string that is a valid torch.dtype also works, e.g. "float32" loads the model in torch.float32, "float16" in torch.float16, etc. For some models the dtype they were trained in is unknown - you may try to check the model's paper or reach out to the authors and ask them to add this information to the model's card and to insert the torch_dtype entry in config.json on the Hub.

device_map (str or dict[str, Union[int, str, torch.device]] or int or torch.device, optional) — A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. If we only pass the device (e.g., "cpu", "cuda:1", "mps", or a GPU ordinal rank like 1) on which the model will be allocated, the device map will map the entire model to this device. Passing device_map = 0 means put the whole model on GPU 0. To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For more information about each option see designing a device map.

max_memory (Dict, optional) — A dictionary mapping device identifiers to maximum memory if using device_map. Will default to the maximum memory available for each GPU and the available CPU RAM if unset.

tp_plan (str, optional) — A torch tensor parallel plan, see here. Currently it only accepts tp_plan="auto" to use a predefined plan based on the model. Note that if you use it, you should launch your script accordingly with torchrun [args] script.py. This will be much faster than using a device_map, but has limitations.

tp_size (str, optional) — A torch tensor parallel degree. If not provided, defaults to world size.

device_mesh (torch.distributed.DeviceMesh, optional) — A torch device mesh. If not provided, defaults to world size. Used only for tensor parallel for now.

offload_folder (str or os.PathLike, optional) — If the device_map contains any value "disk", the folder where we will offload weights.

offload_state_dict (bool, optional) — If True, will temporarily offload the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict plus the biggest shard of the checkpoint does not fit. Defaults to True when there is some disk offload.

offload_buffers (bool, optional) — Whether or not to offload the buffers with the model parameters.

quantization_config (Union[QuantizationConfigMixin, Dict], optional) — A dictionary of configuration parameters or a QuantizationConfigMixin object for quantization (e.g. bitsandbytes, gptq). There may be other quantization-related kwargs, including load_in_4bit and load_in_8bit, which are parsed by QuantizationConfigParser; these are supported only for bitsandbytes quantizations and are not preferred - consider inserting all such arguments into quantization_config instead.

subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.

variant (str, optional) — If specified, load weights from the variant filename, e.g. pytorch_model.<variant>.bin. variant is ignored when using from_tf or from_flax.

use_safetensors (bool, optional, defaults to None) — Whether or not to use safetensors checkpoints. Defaults to None. If not specified and safetensors is not installed, it will be set to False.

weights_only (bool, optional, defaults to True) — Indicates whether the unpickler should be restricted to loading only tensors, primitive types, dictionaries and any types added via torch.serialization.add_safe_globals(). When set to False, we can load wrapper tensor subclass weights.

key_mapping (dict[str, str], optional) — A potential mapping of the weight names if using a model on the Hub which is compatible with a Transformers architecture but was not converted accordingly.

kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: if a configuration is provided with config, **kwargs will be passed directly to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done); if a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.

Explain the above in detail.
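To make several of these options concrete, here is a minimal sketch of loading a checkpoint with an automatic dtype and device map, per-device memory caps, and an 8-bit quantization config. The model id, memory limits, and folder paths below are placeholders chosen for illustration, not recommendations.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Let the checkpoint's own torch_dtype (or its first float weight) pick the dtype,
# and let Accelerate spread submodules over the available devices.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",                                   # placeholder Hub model id
    torch_dtype="auto",                       # read dtype from config.json if present
    device_map="auto",                        # Accelerate computes the device map
    max_memory={0: "10GiB", "cpu": "30GiB"},  # per-device caps used by device_map
    offload_folder="./offload",               # used only if some weights land on "disk"
    use_safetensors=True,                     # prefer safetensors files when available
)

# Quantized variant: pass a QuantizationConfigMixin object rather than the
# legacy load_in_8bit kwarg.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model_8bit = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    quantization_config=quant_config,
    device_map="auto",
    torch_dtype=torch.float16,                # dtype for the non-quantized modules
)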
07-25
(repo-venv) user@user-HP-Pavilion-Gaming-Desktop-690-07xx:~/Code/Polus/VSB-0804$ repo init -u ssh://gerrit6.labcollab.net:9418/vendor/qualcomm/yocto/echo/qcom-yocto-manifest -b vega/echo/mainline --repo-url=ssh://gerrit6.labcollab.net:9418/amazon/repo --repo-branch=stable Traceback (most recent call last): File "/home/user/Code/Polus/VSB-0804/.repo/repo/main.py", line 54, in <module> from subcmds.version import Version File "/home/user/Code/Polus/VSB-0804/.repo/repo/subcmds/__init__.py", line 34, in <module> mod = __import__(__name__, File "/home/user/Code/Polus/VSB-0804/.repo/repo/subcmds/help.py", line 20, in <module> from formatter import AbstractFormatter, DumbWriter File "/home/user/Code/Polus/VSB-0804/.repo/repo/formatter.py", line 327 print "new_alignment(%r)" % (align,) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)? (repo-venv) user@user-HP-Pavilion-Gaming-Desktop-690-07xx:~/Code/Polus/VSB-0804$ cat .repo/repo/formatter.py """Generic output formatting. Formatter objects transform an abstract flow of formatting events into specific output events on writer objects. Formatters manage several stack structures to allow various properties of a writer object to be changed and restored; writers need not be able to handle relative changes nor any sort of ``change back'' operation. Specific writer properties which may be controlled via formatter objects are horizontal alignment, font, and left margin indentations. A mechanism is provided which supports providing arbitrary, non-exclusive style settings to a writer as well. Additional interfaces facilitate formatting events which are not reversible, such as paragraph separation. Writer objects encapsulate device interfaces. Abstract devices, such as file formats, are supported as well as physical devices. The provided implementations all work with abstract devices. The interface makes available mechanisms for setting the properties which formatter objects manage and inserting data into the output. """ import sys AS_IS = None class NullFormatter: """A formatter which does nothing. If the writer parameter is omitted, a NullWriter instance is created. No methods of the writer are called by NullFormatter instances. Implementations should inherit from this class if implementing a writer interface but don't need to inherit any implementation. """ def __init__(self, writer=None): if writer is None: writer = NullWriter() self.writer = writer def end_paragraph(self, blankline): pass def add_line_break(self): pass def add_hor_rule(self, *args, **kw): pass def add_label_data(self, format, counter, blankline=None): pass def add_flowing_data(self, data): pass def add_literal_data(self, data): pass def flush_softspace(self): pass def push_alignment(self, align): pass def pop_alignment(self): pass def push_font(self, x): pass def pop_font(self): pass def push_margin(self, margin): pass def pop_margin(self): pass def set_spacing(self, spacing): pass def push_style(self, *styles): pass def pop_style(self, n=1): pass def assert_line_data(self, flag=1): pass class AbstractFormatter: """The standard formatter. This implementation has demonstrated wide applicability to many writers, and may be used directly in most circumstances. It has been used to implement a full-featured World Wide Web browser. """ # Space handling policy: blank spaces at the boundary between elements # are handled by the outermost context. 
"Literal" data is not checked # to determine context, so spaces in literal data are handled directly # in all circumstances. def __init__(self, writer): self.writer = writer # Output device self.align = None # Current alignment self.align_stack = [] # Alignment stack self.font_stack = [] # Font state self.margin_stack = [] # Margin state self.spacing = None # Vertical spacing state self.style_stack = [] # Other state, e.g. color self.nospace = 1 # Should leading space be suppressed self.softspace = 0 # Should a space be inserted self.para_end = 1 # Just ended a paragraph self.parskip = 0 # Skipped space between paragraphs? self.hard_break = 1 # Have a hard break self.have_label = 0 def end_paragraph(self, blankline): if not self.hard_break: self.writer.send_line_break() self.have_label = 0 if self.parskip < blankline and not self.have_label: self.writer.send_paragraph(blankline - self.parskip) self.parskip = blankline self.have_label = 0 self.hard_break = self.nospace = self.para_end = 1 self.softspace = 0 def add_line_break(self): if not (self.hard_break or self.para_end): self.writer.send_line_break() self.have_label = self.parskip = 0 self.hard_break = self.nospace = 1 self.softspace = 0 def add_hor_rule(self, *args, **kw): if not self.hard_break: self.writer.send_line_break() self.writer.send_hor_rule(*args, **kw) self.hard_break = self.nospace = 1 self.have_label = self.para_end = self.softspace = self.parskip = 0 def add_label_data(self, format, counter, blankline = None): if self.have_label or not self.hard_break: self.writer.send_line_break() if not self.para_end: self.writer.send_paragraph((blankline and 1) or 0) if isinstance(format, str): self.writer.send_label_data(self.format_counter(format, counter)) else: self.writer.send_label_data(format) self.nospace = self.have_label = self.hard_break = self.para_end = 1 self.softspace = self.parskip = 0 def format_counter(self, format, counter): label = '' for c in format: if c == '1': label = label + ('%d' % counter) elif c in 'aA': if counter > 0: label = label + self.format_letter(c, counter) elif c in 'iI': if counter > 0: label = label + self.format_roman(c, counter) else: label = label + c return label def format_letter(self, case, counter): label = '' while counter > 0: counter, x = divmod(counter-1, 26) # This makes a strong assumption that lowercase letters # and uppercase letters form two contiguous blocks, with # letters in order! 
s = chr(ord(case) + x) label = s + label return label def format_roman(self, case, counter): ones = ['i', 'x', 'c', 'm'] fives = ['v', 'l', 'd'] label, index = '', 0 # This will die of IndexError when counter is too big while counter > 0: counter, x = divmod(counter, 10) if x == 9: label = ones[index] + ones[index+1] + label elif x == 4: label = ones[index] + fives[index] + label else: if x >= 5: s = fives[index] x = x-5 else: s = '' s = s + ones[index]*x label = s + label index = index + 1 if case == 'I': return label.upper() return label def add_flowing_data(self, data): if not data: return prespace = data[:1].isspace() postspace = data[-1:].isspace() data = " ".join(data.split()) if self.nospace and not data: return elif prespace or self.softspace: if not data: if not self.nospace: self.softspace = 1 self.parskip = 0 return if not self.nospace: data = ' ' + data self.hard_break = self.nospace = self.para_end = \ self.parskip = self.have_label = 0 self.softspace = postspace self.writer.send_flowing_data(data) def add_literal_data(self, data): if not data: return if self.softspace: self.writer.send_flowing_data(" ") self.hard_break = data[-1:] == '\n' self.nospace = self.para_end = self.softspace = \ self.parskip = self.have_label = 0 self.writer.send_literal_data(data) def flush_softspace(self): if self.softspace: self.hard_break = self.para_end = self.parskip = \ self.have_label = self.softspace = 0 self.nospace = 1 self.writer.send_flowing_data(' ') def push_alignment(self, align): if align and align != self.align: self.writer.new_alignment(align) self.align = align self.align_stack.append(align) else: self.align_stack.append(self.align) def pop_alignment(self): if self.align_stack: del self.align_stack[-1] if self.align_stack: self.align = align = self.align_stack[-1] self.writer.new_alignment(align) else: self.align = None self.writer.new_alignment(None) def push_font(self, font): size, i, b, tt = font if self.softspace: self.hard_break = self.para_end = self.softspace = 0 self.nospace = 1 self.writer.send_flowing_data(' ') if self.font_stack: csize, ci, cb, ctt = self.font_stack[-1] if size is AS_IS: size = csize if i is AS_IS: i = ci if b is AS_IS: b = cb if tt is AS_IS: tt = ctt font = (size, i, b, tt) self.font_stack.append(font) self.writer.new_font(font) def pop_font(self): if self.font_stack: del self.font_stack[-1] if self.font_stack: font = self.font_stack[-1] else: font = None self.writer.new_font(font) def push_margin(self, margin): self.margin_stack.append(margin) fstack = filter(None, self.margin_stack) if not margin and fstack: margin = fstack[-1] self.writer.new_margin(margin, len(fstack)) def pop_margin(self): if self.margin_stack: del self.margin_stack[-1] fstack = filter(None, self.margin_stack) if fstack: margin = fstack[-1] else: margin = None self.writer.new_margin(margin, len(fstack)) def set_spacing(self, spacing): self.spacing = spacing self.writer.new_spacing(spacing) def push_style(self, *styles): if self.softspace: self.hard_break = self.para_end = self.softspace = 0 self.nospace = 1 self.writer.send_flowing_data(' ') for style in styles: self.style_stack.append(style) self.writer.new_styles(tuple(self.style_stack)) def pop_style(self, n=1): del self.style_stack[-n:] self.writer.new_styles(tuple(self.style_stack)) def assert_line_data(self, flag=1): self.nospace = self.hard_break = not flag self.para_end = self.parskip = self.have_label = 0 class NullWriter: """Minimal writer interface to use in testing & inheritance. 
A writer which only provides the interface definition; no actions are taken on any methods. This should be the base class for all writers which do not need to inherit any implementation methods. """ def __init__(self): pass def flush(self): pass def new_alignment(self, align): pass def new_font(self, font): pass def new_margin(self, margin, level): pass def new_spacing(self, spacing): pass def new_styles(self, styles): pass def send_paragraph(self, blankline): pass def send_line_break(self): pass def send_hor_rule(self, *args, **kw): pass def send_label_data(self, data): pass def send_flowing_data(self, data): pass def send_literal_data(self, data): pass class AbstractWriter(NullWriter): """A writer which can be used in debugging formatters, but not much else. Each method simply announces itself by printing its name and arguments on standard output. """ def new_alignment(self, align): print "new_alignment(%r)" % (align,) def new_font(self, font): print "new_font(%r)" % (font,) def new_margin(self, margin, level): print "new_margin(%r, %d)" % (margin, level) def new_spacing(self, spacing): print "new_spacing(%r)" % (spacing,) def new_styles(self, styles): print "new_styles(%r)" % (styles,) def send_paragraph(self, blankline): print "send_paragraph(%r)" % (blankline,) def send_line_break(self): print "send_line_break()" def send_hor_rule(self, *args, **kw): print "send_hor_rule()" def send_label_data(self, data): print "send_label_data(%r)" % (data,) def send_flowing_data(self, data): print "send_flowing_data(%r)" % (data,) def send_literal_data(self, data): print "send_literal_data(%r)" % (data,) class DumbWriter(NullWriter): """Simple writer class which writes output on the file object passed in as the file parameter or, if file is omitted, on standard output. The output is simply word-wrapped to the number of columns specified by the maxcol parameter. This class is suitable for reflowing a sequence of paragraphs. """ def __init__(self, file=None, maxcol=72): self.file = file or sys.stdout self.maxcol = maxcol NullWriter.__init__(self) self.reset() def reset(self): self.col = 0 self.atbreak = 0 def send_paragraph(self, blankline): self.file.write('\n'*blankline) self.col = 0 self.atbreak = 0 def send_line_break(self): self.file.write('\n') self.col = 0 self.atbreak = 0 def send_hor_rule(self, *args, **kw): self.file.write('\n') self.file.write('-'*self.maxcol) self.file.write('\n') self.col = 0 self.atbreak = 0 def send_literal_data(self, data): self.file.write(data) i = data.rfind('\n') if i >= 0: self.col = 0 data = data[i+1:] data = data.expandtabs() self.col = self.col + len(data) self.atbreak = 0 def send_flowing_data(self, data): if not data: return atbreak = self.atbreak or data[0].isspace() col = self.col maxcol = self.maxcol write = self.file.write for word in data.split(): if atbreak: if col + len(word) >= maxcol: write('\n') col = 0 else: write(' ') col = col + 1 write(word) col = col + len(word) atbreak = 1 self.col = col self.atbreak = data[-1].isspace() def test(file = None): w = DumbWriter() f = AbstractFormatter(w) if file is not None: fp = open(file) elif sys.argv[1:]: fp = open(sys.argv[1]) else: fp = sys.stdin for line in fp: if line == '\n': f.end_paragraph(1) else: f.add_flowing_data(line) f.end_paragraph(0) if __name__ == '__main__': test()
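The traceback above comes from running the repo tool under Python 3 while the fetched repo source is still Python 2 code: subcmds/help.py imports the vendored formatter.py shown in the cat output, and the Python 2 print statements in AbstractWriter and DumbWriter fail to parse, hence the SyntaxError at line 327. Likely remedies, depending on what the internal Gerrit mirrors provide, are to point --repo-url/--repo-branch at a Python 3-compatible version of repo, to run this older repo checkout with a Python 2 interpreter, or to patch the vendored file to Python 3 syntax. As a small sketch (paths as in the transcript), the parse failure can be reproduced and the required syntax change illustrated like this:

import py_compile

# Reproduce the failure: byte-compiling the vendored file under Python 3
# raises PyCompileError wrapping the same SyntaxError seen in the traceback.
try:
    py_compile.compile(".repo/repo/formatter.py", doraise=True)
except py_compile.PyCompileError as exc:
    print(exc)

# If patching the file is the chosen route, each Python 2 print statement,
# e.g.  print "new_alignment(%r)" % (align,)
# becomes a Python 3 print() call:
print("new_alignment(%r)" % ("center",))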
08-06