Powershell remoting - Policy does not allow the delegation of user credentials |
I finally got it to work thanks to this page. It provides a script that
sets the required credential delegation policies by setting the appropriate
registry keys directly. Once I ran that script with admin privileges, I was
able to successfully establish a CredSSP connection to myserver:
Enable-WSManCredSSP -Role client -DelegateComputer *.mydomain.com
$allowed = @('WSMAN/*.mydomain.com')
$key = 'hklm:\SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation'
if (!(Test-Path $key)) {
    md $key
}
New-ItemProperty -Path $key -Name AllowFreshCredentials -Value 1 -PropertyType Dword -Force
$key = Join-Path $key 'AllowFreshCredentials'
if (!(Test-Path $key)) {
    md $key
}
$i = 1
$allowed | ForEach-Object {
    # Script does not take into account existing entries in this key
    New-ItemProperty -Path $key -Name $i -Value $_ -PropertyType String -Force
    $i++
}
|
How to set Group Policy "Turn Off Automatic Root Certificates Update" via Registry/Powershell? |
Domain policies override local settings. That's how they're supposed to
work (they'd be rather useless otherwise). If you want the policy disabled,
disable or remove the policy in Group Policy Management or remove the
computer from the domain.
|
unable to understand write policy in Cache memory |
When a block of data is brought from disk into the cache, the cache holds a duplicate copy of the data on disk. So when a write operation modifies a block in the cache, that change must eventually be reflected in disk storage as well.
In a write-through cache, every write to a block in the cache is propagated to disk storage as soon as the data in the cache changes. While this is the simpler approach, it carries a lot of overhead, because every such write incurs a context switch and a virtual-to-real address translation, in addition to the time taken to write the block to disk.
In a write-back cache policy, however, a write only modifies the cached copy and marks the block dirty; the block is written to disk later, when it is evicted from the cache. This coalesces repeated writes to the same block into a single disk write, at the cost of a window during which the disk copy is stale.
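The difference between the two policies can be sketched in a few lines (an illustrative toy model, not a real cache; the block names and counters are invented for the example):

```python
# Toy model: the difference between write-through and write-back is *when*
# a modified block reaches the backing store.

class Cache:
    def __init__(self, policy):
        self.policy = policy          # "write-through" or "write-back"
        self.blocks = {}              # block id -> data held in cache
        self.dirty = set()            # blocks modified but not yet on disk
        self.disk = {}                # stand-in for the backing store
        self.disk_writes = 0          # count of writes that reached "disk"

    def write(self, block, data):
        self.blocks[block] = data
        if self.policy == "write-through":
            self.disk[block] = data   # propagate immediately
            self.disk_writes += 1
        else:
            self.dirty.add(block)     # defer until eviction

    def evict(self, block):
        if block in self.dirty:       # write-back: flush only on eviction
            self.disk[block] = self.blocks[block]
            self.disk_writes += 1
            self.dirty.discard(block)
        self.blocks.pop(block, None)

wt, wb = Cache("write-through"), Cache("write-back")
for c in (wt, wb):
    for i in range(3):                # three writes to the same block
        c.write("b0", i)
    c.evict("b0")
print(wt.disk_writes, wb.disk_writes)  # prints: 3 1
```

Three writes to the same block cost three disk writes under write-through but only one under write-back, which is exactly the overhead trade-off described above.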
|
Serialize execution of symstore via Powershell or BATCH |
Use a file in the shared directory as a semaphore to avoid concurrent
executions.
:checkfile
if exist %cidir%\sem.txt goto :wait10secs
echo gotit! >%cidir%\sem.txt
doit
del %cidir%\sem.txt
goto :eof
:wait10secs
ping 192.0.2.2 -n 1 -w 10000 > nul
goto :checkfile
Be prepared to debug all the strange ways your batch can fail, and all the
nasty race conditions.
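One of those race conditions is built into the check-then-create above: two instances can both pass the "if exist" test before either one creates sem.txt. A hedged sketch of a less racy variant, using atomic exclusive file creation (the UNC path is a placeholder, not a real share):

```python
# Sketch only: serialize runs via atomic lock-file creation.
# O_CREAT | O_EXCL either creates the file or fails -- there is no window
# in which two processes can both believe they grabbed the lock.
import os
import time

LOCK = r"\\server\share\sem.lock"    # placeholder for %cidir%\sem.txt

def run_exclusively(work, lock_path=LOCK, retry_secs=10):
    while True:
        try:
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break                      # creation succeeded: we hold the lock
        except FileExistsError:
            time.sleep(retry_secs)     # another instance holds it; poll again
    try:
        os.close(fd)
        return work()                  # the serialized section ("doit")
    finally:
        os.remove(lock_path)           # release, even if work() raised

print(run_exclusively(lambda: "done", lock_path="sem.lock", retry_secs=1))  # prints: done
```

The same O_CREAT|O_EXCL trick works over SMB shares, which is what makes it a drop-in for the batch approach.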
|
Powershell (Version 2.0) remote execution of services with credentials |
Get-WMIObject accepts the -Credential parameter. You shouldn't be keeping
your credentials in plain text in your script, so you'll want to prompt for
them.
$creds = get-credential;
(Get-WmiObject -Computer myCompName Win32_Service -Filter "Name='myServiceName'" -Credential $creds).InvokeMethod("StopService", $null)
If you have PSRemoting enabled on the remote system, you can do this
without WMI.
$creds = get-credential;
Invoke-Command -computername myCompName -credential $creds -scriptblock {(get-service -name myServiceName).Stop()};
Update based on comments
Since you're running this as a scheduled job, you should not be storing or
prompting for credentials at all. Configure the scheduled task itself (via
Scheduled Tasks) to run under the required user account; then either of the
commands above will work without the -Credential parameter.
|
Unable to connect to Windows Azure from Powershell Azure Powershell module |
To see whether Import-AzurePublishSettingsFile produced any result, you can
call:
Get-AzureSubscription
By default it selects the default/current subscription. If you don't see
any result, then your subscription settings were not imported correctly.
This Introduction to Windows Azure PowerShell may also help you.
|
Alternatives to fix powershell error "execution of scripts is disabled on this system" |
If you had done what the instructions told you to do, you would have gotten
a help page that tells you exactly what you need to do:
get-help about_signing
In summary, the computer has no way to tell whether whoever wrote the script
was a trustworthy person, so by default it does not run any untrusted
scripts. The two ways to fix this are either to allow scripts from unknown
sources (the solution you found, Set-ExecutionPolicy Unrestricted) or to
"sign" the script, proving it came from a trustworthy source and has not
been tampered with since you got it from that source.
To sign your own code you will need a code-signing certificate; read the
about_signing help topic, which has a section called CREATE A SELF-SIGNED
CERTIFICATE that tells you how to do it.
After you have a certificate, you can sign your scripts with the
Set-AuthenticodeSignature cmdlet.
|
Remote Powershell Access denied for certain dll's execution for Sharepoint 2013 |
You need to look at CredSSP authentication. Remote PowerShell execution with
SharePoint fails because the second hop translates your credentials into
system credentials. If the task involves querying or updating a database
server, it will fail, as the SYSTEM account will not have access to remote
PowerShell on the SQL Server. You need to enable CredSSP.
Check this blog post I wrote a while ago. It is not specific to SharePoint,
but it should apply to your scenario as well.
http://www.ravichaganti.com/blog/?p=1230
|
Powershell asking for confirmation before executing the code in allsigned execution mode |
The PowerShell help on execution policies (easily found via PS C:\> help
about_Execution_Policies) shows that in AllSigned mode PowerShell will
prompt you before running scripts from publishers that you have not yet
classified as trusted or untrusted. You can try RemoteSigned, or try this,
wonderfully explained by Scott Hanselman:
http://www.hanselman.com/blog/SigningPowerShellScripts.aspx
Signed scripts can be transported by exporting (from the original
computer) and importing (to the new computer) the PowerShell
certificates found in the Trusted Root Certification Authorities
container. Optionally, the Trusted Publishers can also be moved to
prevent the first-time prompt.
A final note from the blog:
Note that PowerShell will prompt you the first time it's run unless
you also import the Trusted Publishers certificate.
|
Unable to stop execution onclick of cancel button |
Your return statement in showMessage() executes long after
ajaxEditFunctionCall() has completed, because the AJAX call is asynchronous.
The suggested solution is to always return false from the onclick handler.
Then, in the AJAX callback, check the response and use JavaScript to submit
the form if required:
function ajaxEditFunctionCall() {
...
...
return false;
}
function showMessage() {
if (xmlHttp.readyState == 4) {
if (xmlHttp.status == 200) {
var r = confirm(xmlHttp.responseText);
if (r) {
document.forms["my_form_name"].submit();
}
}
}
}
|
Unable to stop the execution of media player in the middle |
Please try the following: create your media player in onCreate() and
declare it as a member of your Activity:
public class MainActivity extends Activity {
    MediaPlayer back_music;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ....
        back_music = MediaPlayer.create(getBaseContext(), R.raw.sher_khan);
        ....
    }
}
Then delete the local line "MediaPlayer back_music =
MediaPlayer.create(getBaseContext(), R.raw.sher_khan);"
and give it another shot :)
Hope it helps you.
|
How does linux process scheduling policy relate to thread scheduling policy? |
Linux does not support process scheduling at all. Scheduling is entirely on
a thread basis. The sched_* functions incorrectly modify the thread
scheduling parameters of the target thread id instead of the scheduling
parameters of a process. See:
http://sourceware.org/bugzilla/show_bug.cgi?id=14829 and
http://sourceware.org/bugzilla/show_bug.cgi?id=15088
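A quick way to see the thread granularity this answer describes (a small sketch; it assumes Linux and Python 3.8+ for threading.get_native_id):

```python
# On Linux, each thread is a separate kernel task with its own id, and the
# sched_* interfaces target those ids individually -- there is no separate
# "process" scheduling entity.
import threading

ids = {"main": threading.get_native_id()}   # kernel task id of main thread

def worker():
    ids["worker"] = threading.get_native_id()

t = threading.Thread(target=worker)
t.start()
t.join()

# Two distinct schedulable entities inside one process; a call like
# sched_setscheduler(tid, ...) would affect exactly one of them.
print(ids["main"] != ids["worker"])   # prints: True
```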
|
"Git Log" command in Powershell - unable to terminate process |
It puts you in a pager (most probably less if you installed MSysGit or
Github for Windows), because the output you requested is longer than one
screen page.
You can scroll up/down/left/right with your arrow keys, the Page Up/Page
Down keys and the J/K/H/L keys.
To show inline help, press ? and to quit, press Q.
You can use a different pager or turn it off if you want to. As man git
config points out, you can use the core.pager setting to set it to a
different pager, or set its value to cat to disable pagination for all Git
commands.
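For example, a hedged sketch of both options (it assumes a standard Git install and runs in a throwaway repository, so it is safe to try anywhere):

```shell
# Work in a scratch repo so this demo touches nothing of yours.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com   # repo-local identity, demo only
git config user.name  you
git commit -q --allow-empty -m "first"

git --no-pager log --oneline   # option 1: bypass the pager for one command
git config core.pager cat      # option 2: disable paging (add --global for all repos)
git log --oneline              # now prints straight through, no pager
```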
|
Powershell - Unable to automatically pass credentials to gmail.com |
The first error can be solved in at least two different ways: using a named
parameter when invoking the Login-Gmail function, or decorating the $uname
parameter with positional information.
Named parameter:
Login-Gmail -uname me@gmail.com
Adding the positional information:
function Login-GMail {
    param(
        [Parameter(Mandatory=$True,Position=1)]
        [string]$uname,
        [Parameter(Mandatory=$False,Position=2)]
        [string]$url="http://www.gmail.com",
        [Parameter(Mandatory=$False,Position=3)]
        [bool]$cookie=$false
    )
Your second error, regarding System.Management.Automation.PSCredential,
happens because you are trying to assign the value of an object (the
PSCredential) where the IE DOM expects a string. You can probably just omit
the call to Get-Credential entirely and assign the username and password
strings directly.
|
Unable to access UNC Paths in Powershell remote session |
You've really got three different things going on here.
1 & 3. Drives are only mapped when you log on interactively. So when
you remoted into the other computer, mapped a drive, and then logged
off/disconnected, that mapped drive was disconnected. Except in interactive
GUI user sessions, you cannot depend upon a mapped drive letter that you
don't create yourself. Within scripts or any remote session, just use UNC
paths for everything; it's more reliable.
2. When you attempt to map the drive in the remote PS session, you're
encountering what's known as the "double hop" problem. There is a solution
to this, but it requires extra setup. See
http://blogs.msdn.com/b/clustering/archive/2009/06/25/9803001.aspx and
Double hop access to copy files without CredSSP
|
Entity Framework: LINQ query generates different SQL between local execution and server execution |
So finally, the problem was the Framework version. I thought it was the
same version of .NET, but it was not: 4.0.30319.1 locally and
4.0.30319.17929 remotely. It turns out that 4.0.30319.17929 is .NET
Framework 4.5, so it is more than just a different build.
I uninstalled version 4.5 and reinstalled 4.0 on the server. It is strange,
because it reinstalled into the folder
C:\Windows\Microsoft.NET\Framework\v4.0.30319, but the file versions are
now correct: 4.0.30319.1 (the file versions were 4.0.30319.17929 before).
Then I changed the .NET version in the IIS application pool. It had been
reset to version 2.0 after the uninstall, so I set it back to version 4 and
restarted the pool (though it still displays version v4.0.30319 in the
application pool...). And now it works like it does locally.
|
How do I see a Unix Job's performance (execution time and cpu resource) after execution? |
You can use the Linux profiling tool perf, e.g.:
perf stat ls
On my computer:
 Performance counter stats for 'ls':

          2.066571 task-clock                #    0.804 CPUs utilized
                 1 context-switches          #    0.000 M/sec
                 0 CPU-migrations            #    0.000 M/sec
               267 page-faults               #    0.129 M/sec
         2,434,744 cycles                    #    1.178 GHz                  [57.78%]
         1,384,929 stalled-cycles-frontend   #   56.88% frontend cycles idle [52.01%]
         1,035,939 stalled-cycles-backend    #   42.55% backend cycles idle  [98.96%]
         1,894,339 instructions              #    0.78  insns per cycle
                                             #    0.73  stalled cycles per insn
|
Why would installing Azure SDK 2.1 or powershell 3 on the build server break some of our tests which run in powershell? |
Have you tried passing a fully qualified path to startInfo.FileName instead
of just Powershell.exe, in case there's a problem with %PATH% since the
update?
A complete guess, but is there any chance you're running into 32- vs.
64-bit file-system redirection issues, as seen in this answer:
Process.Start(): The system cannot find the file specified, but my file
path seems to be legit
|
Add Powershell Snapin for Powershell Module and Import Multiple Times |
You might want to try to specify this module required by your own module
through a module manifest (.psd1). See RequiredModules here.
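A minimal manifest sketch (the module names and version are placeholders for illustration):

```powershell
# MyModule.psd1 -- hypothetical manifest for a module that depends on Pscx
@{
    ModuleVersion   = '1.0'
    RootModule      = 'MyModule.psm1'
    # PowerShell will load these before importing MyModule:
    RequiredModules = @('Pscx')
}
```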
|
Executing Powershell script as different User in Exchange 2007 Powershell |
I just found out that executing remote PowerShell commands/scripts is not
supported with Exchange 2007
(http://howexchangeworks.com/2009/11/exchange-2007-sp2-supports-powershell.html).
So I need to wait for the upgrade to 2013.
Some workarounds:
http://social.technet.microsoft.com/Forums/en-US/exchangesvrgeneral/thread/4596035a-cede-4541-8b8e-e2e9bf1b40dc
Or:
http://peerfect.blogspot.co.at/2012/10/re-blog-of-my-exchange-remote.html
|
PowerShell Community Extensions not Recognized by TeamCity PowerShell Runner |
Do you import the PSCX module in your script? PowerShell v3 will cache the
module info after you have done this once, so you don't need to import it
again. However, if TeamCity is running the 64-bit console and you normally
run the 32-bit console, then the 64-bit console wouldn't have the PSCX
commands in its command cache. In any case, it is good practice to have
your scripts explicitly require the modules they depend upon, e.g.
#requires -Modules Pscx
|
Yii, how do I end Yii app execution without ending PHP script execution |
End Yii with proper cleanup, but without exiting the request. As shown at
http://www.yiiframework.com/doc/api/1.1/CApplication#end-detail, it is done
like so:
Yii::app()->end(0, false);
|
How to stop for loop execution untill DWR method execution completion which is inside the for loop |
You can use "asynchronous pseudo-recursion" instead of a for loop. The
general pattern I use is:
var pods = [ ... ];
(function loop() {
    if (pods.length) {
        var pod = pods.shift(); // take (and remove) first element
        // do something with "pod"
        ...
        // recurse - put this inside a finish callback if "do something" is async
        loop();
    }
})(); // the closing parentheses invoke the function immediately
In your case, the call to loop should be the last thing inside your
callback function.
NB: This pattern can also be used to avoid the "browser not responding"
error seen with longer-running non-async operations, by replacing the call
to loop() with setTimeout(loop, 0). This use of setTimeout turns a
synchronous recursive function into an asynchronous one, yielding control
back to the browser between iterations.
|
Getting "Skipping JaCoCo execution due to missing execution data file" upon executing JaCoCo? |
The execution says it's putting the jacoco data in
/Users/davea/Dropbox/workspace/myproject/target/jacoco.exec but your maven
configuration is looking for the data in
${basedir}/target/coverage-reports/jacoco-unit.exec.
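One hedged way to make the two agree is to point both the agent and the report at the same file in the plugin configuration (the path and plugin version below are illustrative):

```xml
<!-- pom.xml fragment: jacoco-maven-plugin writing and reading the same file -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <id>prepare-agent</id>
      <goals><goal>prepare-agent</goal></goals>
      <configuration>
        <!-- where the agent writes execution data -->
        <destFile>${basedir}/target/coverage-reports/jacoco-unit.exec</destFile>
      </configuration>
    </execution>
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals><goal>report</goal></goals>
      <configuration>
        <!-- where the report goal looks for it -->
        <dataFile>${basedir}/target/coverage-reports/jacoco-unit.exec</dataFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Alternatively, drop both overrides and let each goal use its default, target/jacoco.exec.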
|
How to use powershell variable in PowerShell.AddParameter method in C#? |
The variable won't be available until you invoke the first script. Try
this:
ps.AddScript("$backupFile = [System.IO.Path]::Combine([System.IO.Path]::GetTempPath(),'{0}.bak')".FormatInvariant(databaseName));
ps.Invoke();
ps.Commands.Clear();

var backupFile = ps.Runspace.SessionStateProxy.PSVariable.Get("backupFile");

ps.AddCommand("New-Item")
  .AddParameter("Force")
  .AddParameter("ItemType", "File")
  .AddParameter("Path", backupFile.Value);
ps.Invoke();
If you go this route, though, I don't think you can use the RunspacePool,
because you're likely to get different runspaces between each Invoke(); in
that case, the variable won't be available to the other runspace. Do you
really need to use the RunspacePool in this scenario? If you do, then why
not just do the first bit in C#:
var backupFile = Path.Combine(Path.GetTempPath(), databaseName + ".bak");
|
How do I return value or object from C# powershell command to Powershell |
You have to use the Cmdlet.WriteObject method.
Here is a good explanation from @RomanKuzmin.
|
Powershell Script that will login into powershell for office 365 |
For the Exchange powershell use:
$SecPass1 = ConvertTo-SecureString -AsPlainText -String "PasSword" -Force
$MSOLM = New-Object System.Management.Automation.PSCredential -ArgumentList "GlobalAdmin@some.com",$SecPass1
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $MSOLM -Authentication Basic -AllowRedirection
Import-Module MSOnline
Import-PSSession $Session -AllowClobber
|
Simple DB policy being ignored? |
Ok, figured it out. You have to set the region endpoint on your call to the
service from the client. So:
var simpleDBClient = new AmazonSimpleDBClient(iamkey.AccessKeyId,
    iamkey.SecretAccessKey, iamkey.SessionToken, Amazon.RegionEndpoint.EUWest1);
|
How can I add a privacy policy? |
The way I understand it, you have to use the Privacy Policy field in the
Description step of the app. Have you tried that? The policy also has to be
online; I don't think I was clear about that. Leave a note to the testers
telling them where the policy is hosted: I've heard of apps getting
rejected for not doing that, even though they had the URL.
After all the comments and discussion on Twitter and Facebook, I wrote up a
small blog post on how to do this: basically what I wrote here, plus where
you can host the policy. It is not meant as a way to get traffic to the blog.
|
OpenAM Policy Enforcement |
Set the debug level to 'message' in the agent profile first, and look into
the agent debug log so that you can see exactly which requests the agent
receives.
In general, what you need is possible; it is just a matter of proper
configuration.
-Bernhard
|
C++ |
std::string data = c;
is only good if the string is surely 0-terminated.
std::string raw = data.substr(0, bits);
You could do that more simply:
const std::string raw(c, c + bits);
In your policy function there's a char c for no reason, and if it had a
value > 0, it would likely cause problems.
Most importantly, sending sizeof(Env::Policy()) bytes makes no sense at
all; you should send the whole string:
const auto& policy = Env::Policy();
send(this->s, policy.c_str(), policy.size() + 1, 0);
maybe without the +1, depending on whether you want the terminating 0.
|
Who can exclude Same origin policy? |
It is up to the server (Facebook, Google, etc.) to allow its content to be
loaded across domains. This is called Cross-Origin Resource Sharing (CORS).
To enable CORS on your own server, provide this header in your responses:
Access-Control-Allow-Origin: *
You cannot change the behavior of a server you do not own.
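On a server you do control, the header is typically added in the application or web-server configuration; a hypothetical nginx fragment:

```nginx
# Sketch: let any origin read responses served under /api/
location /api/ {
    add_header Access-Control-Allow-Origin "*";
}
```

Note that the wildcard is not permitted for credentialed requests; in that case the server must echo back one specific allowed origin.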
|
Same Origin Policy - Displaying Ads |
It is an "SOP" issue. But as far as I know, there are elegant ways to
implement advertisements without facing this problem. In addition to
T.J. Crowder's advice to ask your ad broker for a correct implementation,
you might find this interesting:
http://code.google.com/p/browsersec/wiki/Part2#Life_outside_same-origin_rules
|
Restkit Cache policy 20.x |
You can create an RKManagedObjectRequestOperation with an
NSMutableURLRequest and set request.cachePolicy:
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path relativeToURL:self.baseURL]];
request.cachePolicy = NSURLRequestReloadIgnoringLocalAndRemoteCacheData;
RKManagedObjectRequestOperation *operation = [[RKManagedObjectRequestOperation alloc] initWithRequest:request responseDescriptors:[RKObjectManager sharedManager].responseDescriptors];
operation.managedObjectContext = [[RKManagedObjectStore defaultStore] newChildManagedObjectContextWithConcurrencyType:NSPrivateQueueConcurrencyType tracksChanges:YES];
operation.managedObjectCache = [RKManagedObjectStore defaultStore].managedObjectCache;
[operation setCompletionBlockWithSuccess:success failure:failure];
|
JAAS Policy in Eclipse RCP |
I did not succeed in making a JAAS Policy work properly in RCP. I finished
with a dirty hack workaround: throwing AccessControlException right from
the place where the Policy would return false.
Example:
public class MyPolicy extends java.security.Policy {
    public boolean implies(ProtectionDomain domain, Permission permission) {
        ...
        System.out.println("deny all");
        throw new AccessControlException("Access denied");
        // return false;  -- unreachable once the exception is thrown
    }
}
|
RESTful API to get around origin policy |
https://jsonp.jit.su/
That said, please consider very carefully whether you really want to do
this. The same origin policy exists for a good reason.
|
AWS S3 Policy for authenticated users |
I think you could do this with ACLs. Change the permissions on your bucket
to require the 'authenticated-read' canned ACL. Your users will have to set
that ACL flag on their uploads or they'll get an access denied error, but
if you can get them to set that flag, I think this may work for you.
Editing Bucket Permissions
|
Odd sizing policy behavior |
"minimum" means that the widget must have the given size or more (the given
size is a minimum), while "maximum" means that the given size is an upper
limit, so the behaviour you observe is consistent with those semantics.
I would set the spinbox policy to "expanding" and the label to "preferred".
|
Powershell Start-Process to start Powershell session and pass local variables |
I'm pretty sure there's no direct way to pass variables from one PowerShell
session to another. The best you can do is a workaround, like declaring the
variables in the code you pass in -ArgumentList, interpolating the values
from the calling session. How you interpolate the variables into the
declarations in -ArgumentList depends on the types of the variables. For an
array and a string you could do something like this:
$command = '<contents of your scriptblock without the curly braces>'
Start-Process powershell -ArgumentList ("`$Array = echo $Array; `$String = '$String';" + $command)
|
Versioning policy of homebrew formula? |
Homebrew doesn't support this (yet): what other packaging systems might
call a revision. Either you create a fake version number, as you propose
(though that isn't recommended for public packages), or you manually
uninstall and reinstall the package.
|