Archives

All posts by peter

Dynamics CRM JavaScript loading has changed drastically over the years. In D365 9.x you can specify Dependent JavaScripts in the interface, and the platform is supposed to load them asynchronously. Sometimes this works great. But on some forms, at seemingly random times, the dependent script simply never loads and you get an error. I haven't been able to replicate why it happens, but I've had to build a really obtuse workaround for the time being, and I figured I'd share it — even though it's a bad idea and is probably a symptom of an underlying bug in the platform or our instances.

If you want to load another JavaScript file using the built-in D365 jQuery, you can follow this pattern in the onLoad handler of your form script. Replace "formscriptslib.js" with whichever script you're trying to load. The "magic" is the eval call, which executes the fetched script after $.getScript/$.ajax retrieves it; without it, the script doesn't appear to execute at all.

//Ticket.js
function onLoad() {
    var runOnLoad = function() {
        console.log("put your onLoad scripts in here");
    };
    var loadOnLoad = function() {
        if (typeof util === "undefined") {
            console.log("Form Scripts FAILED to load. Loading manually.");
            loadScripts(runOnLoad);
        } else {
            console.log("Form Scripts Loaded Correctly. Continuing to run OnLoad");
            runOnLoad();
        }
        if (typeof util === "undefined") {
            //if it's still not loaded, run this script again
            setTimeout(loadOnLoad, 1000);
        }
    };
    loadOnLoad();
}

function loadScripts(fOnLoad) {
    if (typeof window.$ === 'undefined' || typeof window.$.when === 'undefined') {
        window.$ = parent.$;
    }
    window.parent.$.ajaxSetup({
        cache: true
    });
    window.parent.$.loadScript = function(url, callback) {
        window.parent.$.ajax({
            url: url,
            dataType: 'script',
            success: callback,
            async: false //load synchronously
        });
    };
    var urlOfScript = function(jsFile) {
        var scriptElements = document.getElementsByTagName('script');
        var i, element, myfile, myurl;
        for (i = 0; element = scriptElements[i]; i++) {
            myfile = element.src;
            if (myfile.indexOf(jsFile) >= 0) {
                myurl = myfile.substring(0, myfile.indexOf(jsFile));
            }
        }
        return myurl;
    };
    var directoryOfScript = function(jsFile) {
        var fullUrl = urlOfScript(jsFile);
        //set the "Form" value to whichever location you're putting scripts for Forms
        var returned = fullUrl.substring(fullUrl.indexOf('.com') + 4, fullUrl.indexOf('Form'));
        console.log(returned);
        return returned;
    };
    var formDirm = directoryOfScript('Ticket.js');

    $.when(
        $.getScript(formDirm + "/formscriptslib.js"),
        $.Deferred(function(deferred) {
            $(deferred.resolve);
        })
    ).done(function() {
        if (typeof util === "undefined") {
            eval(arguments[0][0]);
        }
        if (fOnLoad) fOnLoad();
        console.log("place your code here, the scripts are all loaded");
    });
}

If you have already added a custom Enable Rule to the Application Ribbon using custom JavaScript (see Show Global Notification on Load of Model-driven App in Dynamics 365 [Linn's Power Platform Notebook]), it's easy to add a function that dynamically changes the text and color at the top of the page based on the URL host name. Note that this is an unsupported change: if a D365 update alters the underlying DOM that this script modifies, you'll have to update the JavaScript to match.

//call this from an Enable Rule
function dynamicallySetColor() {
    //set the NAV bar color - https://www.rapidtables.com/web/color/RGB_Color.html
    window.LZW = window.LZW || {};
    var setColorByEnv = function setColorByEnv(br, bg, bb, fr, fg, fb) {
        //Background RGB: br, bg, bb
        //Font RGB: fr, fg, fb
        //unsupported change
        if (fr === null || fg === null || fb === null) {
            //just set as white
            fr = 255;
            fg = 255;
            fb = 255;
        }
        LZW.colorCustomized = true;
        console.log("Setting color");
        var style = window.parent.document.createElement('style');
        style.innerHTML = `.pa-v {
            background-color: rgb(${br},${bg},${bb}) !important;
        }
        .pa-k {
            color: rgb(${fr},${fg},${fb}) !important;
        }`;
        window.parent.document.head.appendChild(style);
    };
    var setTitleBar = function setTitleBar(newText) {
        try { //unsupported change
            var parentDiv = window.parent.document.getElementById("id-19");
            if (parentDiv) {
                var spans = parentDiv.getElementsByTagName("span");
                if (spans && spans.length === 1) {
                    var theSpan = spans[0];
                    if (theSpan.innerHTML.indexOf('SANDBOX') !== -1) { //only replace this on SANDBOX
                        theSpan.innerHTML = newText;
                    }
                }
            }
        } catch (e) {
            console.log(e);
        }
    };
    try {
        var thisDn = window.parent.location.host;
        if (thisDn.indexOf("-dev") !== -1) {
            setTitleBar("DEVELOPMENT ENVIRONMENT");
            setColorByEnv(0, 100, 0, 0, 0, 0); //dark green with black font
        } else if (thisDn.indexOf("-qa") !== -1) {
            setTitleBar("QA ENVIRONMENT");
            setColorByEnv(128, 0, 0, 255, 255, 255); //maroon/white
        } else if (thisDn.indexOf("-int") !== -1) {
            setTitleBar("TRAINING ENVIRONMENT");
            setColorByEnv(210, 105, 30, 0, 0, 0); //Chocolate/black
        } else if (thisDn.indexOf("-preprod") !== -1) {
            setTitleBar("PREPRODUCTION ENVIRONMENT");
            setColorByEnv(0, 128, 128, 0, 0, 0); //Teal/black
        }
    } catch (e) {
        console.log(e);
    }
}

When setting up EdgeOS with PPPoE (CenturyLink) and Hairpin NAT, make sure the Port Forward source (LAN) interface is switch0, even if eth1 is the only port in use. I'm not sure exactly what switch0 represents (one would think it's the switched group of eth1-4), but this appears to have fixed an issue where custom (>1024) ports were not reachable.
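For reference, here is a minimal sketch of the relevant EdgeOS CLI configuration; the WAN interface name (pppoe0), rule number, port, and target address are assumptions you would adjust for your network:

```
configure
set port-forward wan-interface pppoe0
set port-forward lan-interface switch0
set port-forward hairpin-nat enable
set port-forward rule 1 description 'custom app'
set port-forward rule 1 protocol tcp
set port-forward rule 1 original-port 8443
set port-forward rule 1 forward-to address 192.168.1.10
set port-forward rule 1 forward-to port 8443
commit ; save
exit
```

The same settings can be made in the GUI under Port Forwarding; the key detail is selecting switch0 (not eth1) as the LAN interface.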

When trying to do a git clone on a new installation of git behind a corporate firewall (with MITM on SSL), I got this error:
# git clone https://github.com/zptaylor/public-repo.git
Cloning into 'public-repo'...
fatal: unable to access 'https://github.com/zptaylor/public-repo.git/': SSL certificate problem: self signed certificate in certificate chain

First I tried switching the backend to schannel, but that threw a different error:
# git config --global http.sslbackend schannel
# git clone https://github.com/zptaylor/public-repo.git
Cloning into 'public-repo'...
fatal: unable to access 'https://github.com/zptaylor/public-repo.git/': schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the
certificate.

Of course, the reason no SSL validation will work is that the certificate chain is, for all intents and purposes, invalid. The easiest way is to just turn verification off, although this completely eliminates HTTPS verification:
# git config --global http.sslbackend openssl
# git config --global http.sslVerify false

A similar solution that breaks security, but for schannel:
# git config --global http.schannelCheckRevoke "false"
# git config --global http.sslbackend schannel

A more precise fix that preserves SSL validation would be to use the openssl backend, create a trust store of valid certificates, and add the MITM's certificate for github.com to it. I think these fixes are reasonable for a developer machine on a controlled network where github.com could not plausibly be spoofed.
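As a sketch of that more precise approach (the file names here are placeholders; in reality you'd export the corporate root CA from your OS certificate store or get it from IT):

```shell
# Build a CA bundle containing the corporate MITM root certificate.
# corp-ca-bundle.pem is a placeholder; it must hold the real PEM cert.
printf 'placeholder for the corporate root CA certificate\n' > corp-ca-bundle.pem

# Point git's openssl backend at that bundle. On a real machine use
# --global; --file is used here so the demo touches only a local file.
git config --file ./demo.gitconfig http.sslbackend openssl
git config --file ./demo.gitconfig http.sslCAInfo "$PWD/corp-ca-bundle.pem"
git config --file ./demo.gitconfig --get http.sslbackend   # prints: openssl
```

With the real CA in the bundle, `git clone https://github.com/...` validates the MITM-presented chain without disabling verification globally.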

Read up more at
https://stackoverflow.com/questions/45556189/git-the-revocation-function-was-unable-to-check-revocation-for-the-certificate
https://github.com/desktop/desktop/blob/development/docs/known-issues.md#certificate-revocation-check-fails--3326
https://github.com/microsoft/Git-Credential-Manager-for-Windows/issues/646

To delete an AWS RDS SQL Server database that has been set Offline, you first have to bring it back online and then drop it, like so:

EXEC rdsadmin.dbo.rds_set_database_online N'YOUR-DATABASE-HERE'
EXECUTE msdb.dbo.rds_drop_database N'YOUR-DATABASE-HERE'

If you have problems, you may try renaming the database and trying again:

EXEC rdsadmin.dbo.rds_modify_db_name N'YOUR-DATABASE-HERE', N'DeleteMe'

If none of these work, you may not have the correct permissions, which could require resetting your master password — see the AWS documentation for that.

If this error occurs while you are on a corporate VPN, where browser traffic is intercepted by a MITM (man-in-the-middle) proxy, you may need to change some or all of these settings on the Firefox about:config page:
security.tls.hello_downgrade_check = false (this should definitely fix it)
security.tls.version.max = 3 (this should definitely fix it)
security.osclientcerts.autoload = true (this might fix it if it is OS cert-related)

The likely reason this occurs is that the security device cannot produce a valid TLS 1.3 HELLO exchange, which trips Firefox's TLS 1.3 downgrade-protection check.

Tomcat 9 needs the WorkingDirectory specified in the Systemd service in order to work. Without it, the service will start but never completely load!

To use it, put this in /etc/systemd/system/tomcat.service and install the service as usual.


[Unit]
Description=Apache Tomcat Web Application Container
After=network.target
After=systemd-user-sessions.service
After=network-online.target

[Service]
User=tomcat
Group=tomcat
Type=forking
WorkingDirectory=/app/XXX/bin
ExecStart=/bin/bash /app/XXX/bin/catalina.sh start
ExecStop=/bin/bash /app/XXX/bin/catalina.sh stop

[Install]
WantedBy=multi-user.target

##SetWifiDNS.ps1
##Gets the IP details of the "wifi" net adapter; if it is on 192.168.0.x, sets 192.168.0.2 as the first DNS server;
##otherwise it just resets the DNS!!
##Needs error handling

$currentDir = (Get-Item -Path ".\").FullName + "\"
$wifiPrivateSubnet = "192.168.0."
$wifiPrivateDNS = @("192.168.0.2", "1.1.1.1", "1.0.0.1", "8.8.8.8")
$wifiName = "wifi"
$wifiIp = Get-NetIPConfiguration -InterfaceAlias $wifiName | Select-Object IPv4Address, InterfaceAlias

# Details for logging
$LogPath = $currentDir
$LogFile = $LogPath + "SetWifiDNS.log"
$LogTime = Get-Date -Format "MM-dd-yyyy_hh-mm-ss"

Write-Host $wifiName $wifiIp.IPv4Address.IPAddress "is checking to match" $wifiPrivateSubnet

$matchesThePrivate = $wifiIp.IPv4Address.IPAddress -like ($wifiPrivateSubnet + "*")

If ($matchesThePrivate) {
    #on the private network: put the local DNS server first
    Get-NetAdapter -Name $wifiName | Set-DnsClientServerAddress -ServerAddresses $wifiPrivateDNS
    $msg = $wifiName + " being set to DNS " + ($wifiPrivateDNS -join ",")
}
Else {
    #otherwise reset to the DHCP-provided DNS
    Get-NetAdapter -Name $wifiName | Set-DnsClientServerAddress -ResetServerAddresses
    $msg = $wifiName + " being reset, logging to " + $LogFile
}

if (-Not [IO.Directory]::Exists($LogPath)) {
    New-Item -ItemType directory -Path $LogPath
}

$LogTime + " - " + $msg | Out-File $LogFile -Append -Force

If you get an error that port 53 is already in use when starting Pi-hole, you can disable the dnsmasq instance started by libvirt's default network using these commands:

virsh net-autostart --disable default
virsh net-destroy default

From:

https://forums.unraid.net/topic/48744-support-pihole-for-unraid-spants-repo/?do=findComment&comment=586034