CISSP Study Notes Chapter 21 - Malicious Code and Application Attacks

Chapter 21 covers the topics of assessing vulnerabilities of security designs and vulnerabilities in web-based systems, as well as identifying security controls in development environments and applying secure coding guidelines.

Last summer I spent about a month studying for and getting my Certified Information Systems Security Professional (CISSP) certification from ISC2. I went about studying for the test a few ways:

  • I used the PocketPrep app
  • I attended a study bootcamp
  • I did a bunch of practice tests

And finally…

  • I got the ISC2 CISSP official study guide - I read it cover to cover, and highlighted and annotated the entire thing.

Twitter (@MrThomasRayner) told me there is interest in seeing my study notes. So, here we go! Welcome to my 21 part series on the takeaways and crucial points from each chapter in the ISC2 CISSP official study guide. To be clear, this isn’t a replacement for all those other study methods I mentioned above. This is just a supplement. This also isn’t everything you need to know for the test. This is just what I feel are the most important points.

It’s important to remember that while many of these terms and phrases have different meanings in different contexts, the definitions I’m providing below are the ones that are relevant in the CISSP exam. Your own training or experience may tell you that a definition is incorrect or invalid, but if you want to get the exam questions right, you’ll have to know them as they’re defined in the books and study material.

The CISSP exam is often said to be “a mile wide but only an inch deep” which means you need to know a little bit about a lot of stuff. Accordingly, these posts contain a lot of points and while you might not be questioned on all of them, you could be questioned on any of them. It’s important to have a good grip on every chapter in its entirety.

Chapter 21 - Malicious Code and Application Attacks

My key takeaways and crucial points

Malicious Code

  • Script kiddie - a malicious individual who doesn’t understand the technology behind vulnerabilities, but downloads and launches ready-to-use tools. Often located in countries with weak law enforcement, and often uses malware to steal money and identities.
  • Advanced persistent threat - APT, sophisticated adversaries with advanced technical skills and financial resources. Often military units or intelligence agencies, and have access to zero day exploits.
  • Virus - two main functions, propagation and destruction
    • Propagation techniques
      • Master boot record - attack bootable media
      • File infector - targets executables ending in .exe or .com and alters their code
      • Macro - leverage scripting functionality of other software
      • Service injection - inject into trusted runtimes like explorer.exe
  • Antivirus mechanisms
    • Signature based detection - database of characteristics that identify viruses
    • Eradicate viruses
    • Quarantine - isolate but not remove
    • Require frequent updates
    • Heuristic - examines the behavior of software to look for bad behavior
  • Multipartite virus - uses more than one propagation technique
  • Stealth virus - tampers with the OS to fool antivirus into thinking everything is fine
  • Polymorphic virus - modifies its own code from system to system
  • Encrypted virus - similar to polymorphic
  • Hoax - nuisance and wasted resources
  • Logic bomb - lies dormant until triggered by one or more met conditions like time, a program launch, etc.
  • Trojan horse - software that appears benevolent but carries a malicious payload
  • Ransomware - encrypts files and demands payment in exchange for the decryption key
  • Worms - propagate themselves without requiring human intervention
  • Code Red worm - summer of 2001, attacked unpatched Microsoft IIS servers
  • Stuxnet - mid 2010, attacked unprotected administrative shares and used zero-day vulnerabilities to specifically attack systems used in the production of material for nuclear weapons
  • Spyware - monitors your actions
  • Adware - shows you advertisements
  • Zero day attack - the necessary delay between discovery of a new type of malicious code and the issuance of patches creates a window for zero-day attacks

Password Attacks

  • Password guessing - attackers simply attempt to guess the user’s password
  • Dictionary attacks - tools like John the Ripper take a list of possible passwords and run an encryption function against them to see which one matches an encrypted password (see the sketch after this list)
  • Rainbow table - pre-calculated list of known plaintext and its encrypted value, used to decrease the time taken to do dictionary attacks
  • Social engineering
    • Tricking a user into sharing sensitive information like their password
    • Spear phishing - specifically targeted at an individual
    • Whaling - subset of spear phishing sent to high value targets
    • Vishing - phishing over voice communications
    • Dumpster diving - attackers go through trash to look for sensitive information
  • Users should choose strong passwords and keep them a secret
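
To make dictionary and rainbow table attacks concrete, here is a minimal Python sketch, assuming the attacker has obtained an unsalted SHA-256 password hash; the hash, wordlist, and password are all hypothetical. A rainbow table simply precomputes the loop’s hash calculations ahead of time.

```python
import hashlib

# Hypothetical stolen hash and wordlist -- illustrative values only.
captured_hash = hashlib.sha256(b"autumn2024").hexdigest()
wordlist = ["password", "letmein", "autumn2024", "qwerty"]

# Dictionary attack: hash each candidate the same way the target system
# does and compare against the stolen hash. A rainbow table would store
# these candidate/hash pairs in advance, trading storage for speed.
for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == captured_hash:
        print(f"Match found: {candidate}")
        break
```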

Application Attacks

  • Buffer overflows - devs don’t properly validate user input, and input that is too large can overflow a data structure to affect other data stored in memory.
  • Time of check/time of use - timing vulnerability where a program checks access permissions too far in advance of a resource request (see the sketch after this list)
  • Back door - undocumented sequences that allow individuals to bypass normal access restrictions
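
To illustrate the time-of-check/time-of-use race referenced above, here is a minimal Python sketch; the file path is hypothetical, and the safer pattern shown is one common mitigation (checking at the time of use instead of in advance).

```python
import os

path = "/tmp/report.txt"  # hypothetical path

# Time of check: permissions are verified here...
if os.access(path, os.R_OK):
    # ...but an attacker could swap the file (e.g., for a symlink to a
    # sensitive file) in the window before the time of use below.
    with open(path) as f:
        data = f.read()

# Safer: skip the separate check and handle failure at the time of use.
try:
    with open(path) as f:
        data = f.read()
except OSError:
    data = None
```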

Web Application Security

  • Cross site scripting - XSS, when web apps contain some kind of reflected input. User input is embedded in the site and can be used to perform malicious activities.
  • Cross site request forgery - XSRF/CSRF, similar to cross site scripting, but exploit a trust relationship. Exploit the trust a remote site has in a user’s system to execute commands on the user’s behalf, often when users are logged into multiple websites at the same time in one browser window.
  • SQL injection - poorly sanitized input contains SQL commands which are executed. Combat by using prepared statements, validating user input, and limiting account privileges (see the sketch after this list).
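
As a quick illustration of the prepared-statement defense, here is a minimal Python sketch using the standard library’s sqlite3 module; the table, user, and injection string are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # hypothetical malicious input

# Vulnerable pattern: string concatenation lets input rewrite the query.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Prepared statement: the placeholder treats the input purely as data.
row = conn.execute("SELECT role FROM users WHERE name = ?",
                   (user_input,)).fetchone()
print(row)  # None -- the injection string matches no user
```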

Reconnaissance Attacks

  • Reconnaissance - Attackers find weak points in targets to attack.
  • IP probes - automated tools that attempt to ping addresses in a range.
  • Port scans - probe all the active systems on a network and determine what services are running on each machine (see the sketch after this list).
  • Vulnerability scans - discover specific vulnerabilities in a system.
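
The sketch below shows the idea behind a simple TCP connect port scan in Python; the target address and port list are hypothetical (the address is in the unroutable TEST-NET range), and you should only ever scan systems you are authorized to test.

```python
import socket

target = "192.0.2.10"  # hypothetical TEST-NET address

for port in (22, 80, 443):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP handshake succeeds,
        # meaning a service is listening on that port.
        if s.connect_ex((target, port)) == 0:
            print(f"Port {port} is open")
```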

Masquerading Attacks

  • Impersonation of someone who does not have the appropriate access permissions.
  • IP spoofing - an attacker reconfigures their system to make it look like they have the IP address of a trusted system.
  • Session hijacking - an attacker intercepts part of the communication between an authorized user and a resource, and then uses a hijacking technique to take it over and assume the identity of the authorized user.

CISSP Study Notes Chapter 20 - Software Development Security

Chapter 20 talks about understanding the security in the software development lifecycle, identifying and applying security controls in development environments, assessing the effectiveness of software security, assessing security impact of acquired software, and applying secure coding guidelines and standards.

Chapter 20 - Software Development Security

My key takeaways and crucial points

Software Development

  • Programming languages
    • Binary code - what computers understand, a series of 1s and 0s called machine language.
    • High level languages like Python, C++, Ruby, R, Java, and Visual Basic allow programmers to write instructions that better approximate human communication.
    • Compiled languages like C, Java, FORTRAN use a compiler to convert the higher level language into an executable that the computer understands.
    • Interpreted languages like Python, R, JavaScript are not compiled and run in their original versions.
    • Compiled code is generally less prone to third-party manipulation, but it is easier to hide malicious code in it. Overall, compiled code is neither more nor less secure than interpreted code.
  • Object oriented programming
    • Each object in the OOP model has methods that correspond to specific actions that can be taken on the object, and inherit methods from their parent class
    • Provides a black-box approach to abstraction
    • Message - a communication to or input of an object
    • Method - internal code that defines the actions an object performs
    • Behavior - result of an object processing a method
    • Class - collection of common methods from a set of objects that defines behavior
    • Instance - objects are instances of a class
    • Inheritance - methods from a class are passed from a parent class to a child class
    • Delegation - forwarding a request by an object to another object
    • Polymorphism - the characteristic of an object that allows it to respond with different behaviors to the same message or method because of changes in external conditions (see the sketch after this list)
    • Cohesion - strength of the relationship between purposes of methods within the same class
    • Coupling - level of interaction between objects
  • Assurance - properly implementing security policy throughout the lifecycle of the system (according to the Common Criteria in a government setting)
  • Avoiding and mitigating system failure
    • Input validation - when a user provides a value to be used in a program, make sure it falls within the expected parameters, otherwise processing is stopped. Limit checks are when you check that a value falls within an acceptable range. Validation should always occur on the server side of a transaction.
    • Authentication and session management - require that users authenticate, and developers should seek to integrate apps with the organization’s existing authentication systems. Session tokens should expire, and cookies should only be transmitted over secure, encrypted channels.
    • Error handling - Errors should not expose sensitive internal information to attackers.
    • Logging - OWASP suggests logging these events: input validation failures, authentication attempts and failures, access control failures, tampering attempts, use of invalid or expired session tokens, exceptions raised by the OS or applications, administrative privilege usage, TLS failures, and cryptographic errors.
  • Fail secure - the system defaults to a high level of security (blocking access) when it fails
  • Fail open - allows users to bypass failed security controls
  • Software should revert to a fail-secure state. This is what the Windows Blue Screen of Death does.
  • Must balance security, functionality, user-friendliness.
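
The minimal Python sketch below ties several of the OOP terms above together (class, instance, method, message, inheritance, polymorphism); the Account example itself is hypothetical.

```python
class Account:                      # class: common methods for a set of objects
    def __init__(self, owner):
        self.owner = owner

    def describe(self):             # method: internal code defining an action
        return f"Account held by {self.owner}"

class SavingsAccount(Account):      # inheritance: parent class -> child class
    def describe(self):             # polymorphism: same message, new behavior
        return f"Savings account held by {self.owner}"

acct = SavingsAccount("Alice")      # instance: an object of a class
print(acct.describe())              # message: a communication to the object
```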

Systems Development Lifecycle

  • Conceptual definition - creating the basic concept statement for a system. Not longer than one or two paragraphs, and is agreed on by all interested stakeholders.
  • Functional requirements determination - specific functionalities listed, devs start to think about how the parts of the system should interoperate. Think about input, behavior, output. Stakeholders must agree to this, too, and this document should be often referred to.
  • Control specifications development - continues from the design and review phases above. Consider access controls, how to maintain confidentiality, and how to provide an audit trail and a detective mechanism for illegitimate activity.
  • Design review
  • Code review walk-through - developers start writing code, walk through it looking for problems.
  • User acceptance testing - actual users validate the system
  • Maintenance and change management - ensure continued operation while requirements and systems change

Lifecycle Models

  • Waterfall - Invented by Winston Royce in 1970. 7 stages, and each stage must be completed before the project moves to the next phase. Modern waterfall allows for moving backwards via a “feedback loop”. One of the first comprehensive attempts to model the software development process.
  • Spiral - 1988 by Barry Boehm, allows for multiple iterations of a waterfall style process. System developers apply the whole waterfall process to the development of several prototypes, and return to the planning stages as demands and requirements change.
  • Agile - emphasis on needs of the customer, quickly developing new functionality. Highest priority is to satisfy the customer through early and continuous delivery, handle changing requirements, prefer short timescales, collaboration.
  • Gantt charts show interrelationships over time between projects and schedules. PERT is a project scheduling tool that relates the estimated lowest possible size, most likely size, and highest possible size for each component (see the sketch after this list).
  • Change and configuration management - changes should be centrally logged.
    • Request control - users can request modifications, managers can conduct cost/benefit analysis, and tasks can be prioritized
    • Change control - developers try to recreate situation encountered by the user, implements an organized framework, and allows devs to test a solution before rolling it out
    • Release control - changes are reviewed and approved, includes acceptance testing
    • Configuration identification
    • Configuration control
    • Configuration status accounting
    • Configuration audit
  • DevOps - seeks to unify software development, quality assurance, and technology operations, rather than allowing them to operate in separate silos. Aims to decrease time required to develop and deploy software changes - you might even deploy several times a day.
  • Application programming interfaces - APIs, allow websites to interact with each other by bypassing traditional webpages and interacting with the underlying service. May have authentication requirements.
  • Software testing
    • Reasonableness check - Ensures the values returned by software match criteria, should be done via separation of duties
    • White box testing - step through code line by line
    • Black box testing - from a user’s perspective
    • Gray box testing - combine white and black
    • Static testing - without running the code
    • Dynamic testing - done in a runtime environment
  • Code repositories are a central storage point for developers to collaborate on source code.
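
For the PERT bullet above, the classic three-point formula (a standard PERT technique, not spelled out in the study guide excerpt) weights the most likely size most heavily; the numbers below are hypothetical.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    # Weighted mean: the most likely estimate counts four times.
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(pert_estimate(10, 15, 26))  # 16.0 -- hypothetical person-days
```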

Establishing Databases and Data Warehousing

  • Hierarchical data model - logical tree structure, a one to many model
  • Distributed data model - data stored in several databases that are logically connected
  • Relational database - each table looks like a spreadsheet with row/column structure and a one to one mapping relationship
    • Candidate keys - subset of attributes that can uniquely identify a record in the table
    • Primary keys - selected from candidate keys to identify data. Only one primary key per table.
    • Foreign keys - used to enforce relationships between two tables (referential integrity), and ensure that if one table contains a foreign key, it corresponds to a primary key in another table
  • Database transactions - discrete sets of SQL instructions that will either succeed or fail as a group (see the sketch after this list).
    • Must be committed to the database and cannot be undone when it succeeds.
    • ACID model
      • Atomicity - all or nothing
      • Consistency - consistent with all the database’s rules
      • Isolation - transactions operate separately
      • Durability - once committed, they are preserved
  • Security for multilevel databases
    • These contain information with a variety of different classifications, and must verify labels assigned to owners and provide only the appropriate information
    • Concurrency - edit control is a preventive control that ensures information stored in the database is always correct. Locks allow one user to make changes but deny other users access at the same time.
    • Lost updates - when different processes make updates and are unaware of each other
    • Dirty reads - reading a record from a transaction that did not successfully commit
  • Open database connectivity - a proxy between applications and backend database drivers that give programmers greater freedom in creating solutions without having to worry about the underlying database
  • NoSQL - key/value stores that are good for high-speed applications, graph databases, and document stores.
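
Here is a minimal sketch of a transaction’s ACID properties using Python’s sqlite3 module; the accounts table and the simulated failure are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    # Atomicity: both updates must succeed together or not at all.
    conn.execute("UPDATE accounts SET balance = balance - 50 "
                 "WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 50 "
                 "WHERE name = 'bob'")
    raise RuntimeError("simulated crash before commit")
    conn.commit()  # durability: once committed, the change is preserved
except RuntimeError:
    conn.rollback()  # the partial transfer is undone

print(conn.execute("SELECT * FROM accounts").fetchall())
# [('alice', 100), ('bob', 0)] -- balances unchanged
```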

Storing Data and Information

  • Storage types
    • Primary/real memory - resources directly available to the CPU like RAM
    • Secondary storage - inexpensive, nonvolatile storage like hard drives
    • Virtual memory - simulate more primary memory via secondary storage
    • Random access storage - request contents from any point within the media (RAM and hard drives)
    • Sequential access storage - needs to scan through the entire media, like a tape
    • Volatile storage - loses contents when power is removed (RAM)
    • Nonvolatile storage - does not depend on the presence of power
  • Covert storage channels allow transmission of sensitive data between classification levels through the direct or indirect manipulation of shared storage media.

Understanding Knowledge Based Systems

  • Expert systems - embody accumulated knowledge of experts. Have a knowledge base and an inference engine. Knowledge is codified in a series of “if/then” statements (see the sketch after this list).
  • Inference engines examine information in the knowledge base to arrive at a decision.
  • Machine learning
    • Supervised learning uses labeled data
    • Unsupervised learning uses unlabeled data
  • Neural networks
    • Chains of computational units used to attempt to imitate biological reasoning processes of the human mind
    • Extension of machine learning
    • Aka deep learning
    • Delta rule - the ability to learn from experience
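
A toy Python sketch of a knowledge base and inference engine, as referenced above; the if/then rules and the facts are hypothetical.

```python
# Knowledge base: expert knowledge codified as if/then rules.
rules = [
    (lambda f: f["failed_logins"] > 5, "possible brute-force attack"),
    (lambda f: f["after_hours"] and f["privileged"], "suspicious admin activity"),
]

def inference_engine(facts):
    # Examine the knowledge base against the facts to reach conclusions.
    return [conclusion for condition, conclusion in rules if condition(facts)]

print(inference_engine({"failed_logins": 9, "after_hours": True,
                        "privileged": False}))
# ['possible brute-force attack']
```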

CISSP Study Notes Chapter 19 - Investigations and Ethics

Chapter 19 covers how to understand, adhere to, and promote professional ethics, understanding and supporting investigations, and understanding different investigation types.

Chapter 19 - Investigations and Ethics

My key takeaways and crucial points

Investigations

  • Administrative investigations - internal investigations that examine either operational issues or a violation of the organization’s policies. May transition to another type of investigation.
  • Root cause analysis - determine the reason that something occurred.
  • Criminal investigations - conducted by law enforcement, related to alleged violation of criminal law. Must meet the “beyond a reasonable doubt” standard, which states there are no other logical conclusions.
  • Civil investigations - do not involve law enforcement, but involve internal employees and outside consultants working for a legal team. Must meet the weaker “preponderance of the evidence” standard that demonstrates the outcome is more likely than not. Not as rigorous.
  • Regulatory investigations - government agencies do these when they think there’s been a violation of administrative law. Violations of industry standards.
  • Electronic discovery
    • Information governance - info is well organized
    • Identification - locates the information
    • Preservation - protected against alteration or deletion
    • Collection - gathers the responsive information centrally
    • Processing - screens the collected information
    • Review - determine what information is responsive to the event
    • Analysis - deeper inspection
    • Production - place info in a format in which it may be shared
    • Presentation - show info to witnesses, the court, other parties
  • Admissible evidence - must be relevant to determining a fact, material/related to the case, competent (obtained legally)
  • Evidence types
    • Real - things that may actually be brought into a court of law, aka conclusive evidence
    • Documentary - written items brought to court to prove a fact at hand
    • Testimonial - testimony of a witness, can be direct evidence based on their observations, or expert opinions
    • Hearsay - something that was told to someone outside of court - not admissible
  • Best evidence rule - original documents must be introduced, not copies
  • Parol evidence rule - when an agreement between parties is put into writing, the document is assumed to contain all the terms of the agreement and that no verbal agreement may modify the written agreement
  • Chain of evidence/custody
    • Evidence should be labeled with a general description, time and date of collection, location evidence was collected from, name of collector, and relevant circumstances

Evidence Collection and Forensic Procedures

  • Actions taken to collect should not change evidence
  • Person should be trained to access evidence
  • All activity related to evidence should be fully documented, preserved, available for review
  • Individuals are responsible for all actions taken
  • Preserve the original evidence
  • Network analysis - when incidents take place over a network. Often difficult to reconstruct because networks are volatile, and reconstruction depends on prior knowledge that an incident is underway, or on logs.
  • Software analysis - reviews of applications or activity, or review of software code and log files.
  • Hardware/embedded device analysis - includes memory, storage systems

Investigation Process

  • Rules of engagement - define and guide investigative actions
  • Gathering evidence
    • Voluntary surrender - given up willingly, usually when the attacker is not the owner
    • Subpoena - or court order, the court compels someone to provide evidence, but this gives the data owner time to alter the evidence and ruin it
    • Search warrant - used when you must have access to evidence without alerting the evidence owner or other personnel; the court allows you to seize evidence
  • Deciding whether or not to involve law enforcement is challenging because incidents are more likely to become public, and the Fourth Amendment hampers government investigators in ways that private companies are not.
  • Never conduct investigations on an actual system that was compromised. Take them offline and use backups.
  • Do not attempt to “hack back” and avenge a crime.
  • Call in expert assistance if needed.
  • Interviewing - gather information from an individual. If information is presented in court, the interview is an interrogation.
  • Attackers often try to sanitize log files after attacking, so to preserve evidence, logs should be centralized remotely.
  • A final report should be produced by any investigation that details the processes followed, evidence collected, and final results of investigation. Lays the foundation for escalation and legal action.

Major Categories of Computer Crime

  • Computer crime - violation of law that involves a computer. Any individual who violates your security policies is an attacker.
  • Military and intelligence attacks - restricted information from law enforcement or military and research sources
  • Business attacks - focus on illegally obtaining confidential information. Aka corporate espionage or industrial espionage. Stealing trade secrets.
  • Financial attacks - carried out to unlawfully obtain money or services. Ex: shoplifting, burglary.
  • Terrorist attacks - to disrupt normal life and instill fear, as opposed to military or intelligence attack which is designed to extract secret information.
  • Grudge attacks - to do damage to an organization or person, usually out of resentment or to “get back at” an organization. Insider threat is big; these attacks can come from disgruntled employees.
  • Thrill attacks - done for “the fun of it”, usually by “script kiddies”. May also be related to “hacktivism”.

Ethics

  • Rules that govern personal conduct
  • Codes of ethics are not laws, but standards for professional behavior

You should study and review the ISC2 Code of Ethics prior to taking your CISSP exam


CISSP Study Notes Chapter 18 - Disaster Recovery Planning

Chapter 18 dives into security assessment and testing, and security operations like implementing recovery strategies, DR processes, and testing disaster recovery plans.

Chapter 18 - Disaster Recovery Planning

My key takeaways and crucial points

Managing Incident Response

  • Disaster recovery plan - covers situations where tensions are already high and cooler heads may not naturally prevail, so it should be set up to basically run on autopilot and remove all decision making.
  • Natural disasters
    • Earthquake - shifting of seismic plates
    • Floods - Gradual accumulation of rainwater, or caused by seismic activity (tsunamis)
      • “100 year flood plain” means there is an estimated chance of flooding in any given year of 1/100
    • Storms - Prolonged periods of intense rainfall
    • Fires - including wildfires, and man-made, may be caused by carelessness, faulty electrical wiring, improper fire protection practices
      • 1000 building fires in the United States every day
    • Acts of terrorism - General business insurance may not cover against terrorism
    • Bombings/explosions - Including gases from leaks
    • Power outages - protected against by uninterruptible power supply (UPS)
    • Network, utility, and infrastructure failures
      • Which critical systems rely on water, sewers, natural gas, or other utilities?
      • Think about internet connectivity as a utility.
      • Do you consider people a critical business system? People rely on things like water.
    • Hardware/software failures - Hardware components simply wear out, or suffer physical damage.
    • Strikes/picketing - human factor. If a large number of employees walk out at the same time, what would happen to your business?
    • Theft/vandalism - Insurance provides some financial protection

Understand System Resilience and Fault Tolerance

  • Single point of failure - SPOF, any component that can cause an entire system to fail.
  • Fault tolerance - the ability of a system to suffer a fault but continue to operate
  • System resilience - the ability of a system to maintain an acceptable level of service during an adverse event
  • Protecting hard drives
    • RAID-0 - striping
    • RAID-1 - mirroring
    • RAID-5 - striping with parity
    • RAID-10 - aka RAID-1+0, or a stripe of mirrors
  • Protecting servers
    • Failover - If one server fails, another server in a cluster can take over its load
    • Load balancers detect failures and stop sending traffic to the bad server
    • Provide fault tolerance
    • Many IaaS providers offer load balancing that automatically scales resources as needed
  • Protecting power sources
    • Uninterruptible power supply - UPS, battery supplied power for a short period of time that kicks in when power is lost, while a generator starts up to provide backup power
    • Spike - a quick instance of voltage increase
    • Sag - a quick reduction in voltage
    • Surge - a long instance of a spike
    • Brownout - a long instance of a sag
    • Transients - noise on a power line that can come from different sources
    • Line interactive UPS - include variable voltage transformer that helps adjust to over/under voltage events

Trusted Recovery

  • Trusted recovery - After a failure, the system is just as secure as it was before
  • Fail-secure system - defaults to a secure state in the event of a failure, blocking all access (see the sketch after this list)
    • Firewalls are normally fail-secure
  • Fail-open system - defaults to an open state, granting access
    • Emergency exit doors are normally fail-open to allow people to escape a hazard in an emergency
  • Manual recovery - After a system failure, it does not fail in a secure state and an administrator needs to perform actions to implement a secured or trusted recovery
  • Automated recovery - The system can perform a trusted recovery to restore itself against at least one type of failure
  • Automated recovery without undue loss - Like automated recovery but it also includes mechanisms to ensure that specific objects are protected to prevent their loss
  • Function recovery - Automatically recover specific functions
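
A minimal Python sketch of fail-secure behavior, as referenced above; the policy lookup and its failure are hypothetical.

```python
def check_access(user, resource):
    # Hypothetical policy lookup that can fail (e.g., directory outage).
    raise ConnectionError("policy server unreachable")

def authorize(user, resource):
    try:
        return check_access(user, resource)
    except Exception:
        # Fail-secure: any failure in the control defaults to denial.
        # A fail-open design would return True here instead.
        return False

print(authorize("alice", "payroll"))  # False -- access blocked on failure
```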

Quality of Service

  • Bandwidth - network capacity
  • Latency - time it takes a packet to travel
  • Jitter - variation in latency
  • Packet loss - requires retransmission
  • Interference - electrical noise, faulty equipment

Recovery Strategy

  • DR plan should be designed so the first employees on the scene can immediately start recovery efforts in an organized way, even if the official disaster recovery team isn’t there yet
  • Insurance can reduce the risk of financial losses
  • Business unit and functional priorities
    • Must engineer DR plan to allow highest priority business units to recover first
    • Not all critical functions will be carried out in critical business units
    • Perform a business impact assessment (BIA) - Identify vulnerabilities, develop strategies to minimize risk, provide a report that describes risks. Also identify costs related to failures. Results in a prioritization task. Minimum output of BIA is a simple listing of business units in priority order.
  • Crisis management
    • Individuals in business who are most likely to notice an emergency situation should be trained in DR procedures and know proper notification processes
  • Emergency communications
    • Communicate internally during a disaster so employees know what is expected of them
  • Workgroup recovery
    • The goal is to restore workgroups to the point that they can resume their activities in their usual work locations
    • May need separate recovery facilities for different workgroups
  • Alternate processing sites
    • Cold site - standby facility, no computing facilities preinstalled. Low cost.
    • Hot site - backup facility that is maintained in constant working order, can have replication forced to it or backups taken from primary site to hot site. Higher cost.
    • Warm site - between hot and cold sites, contain equipment needed to establish operation, but not the production data, may take 12 hours to become operational.
    • Mobile site - self contained trailers or other relocated units
    • Service bureau - company that leases computer time
    • Cloud computing - ready-to-run images in cloud providers is usually cost-effective
    • Mutual assistance agreements - MAA, aka reciprocal agreements are rarely implemented but would mean two organizations pledge to assist each other if there’s a disaster by sharing computing facilities. Difficult to enforce, confidentiality concerns, proximity is an issue.
  • Database recovery
    • Electronic vaulting - database backups are moved to a remote site using bulk transfers. Done in batch, not realtime.
    • Remote journaling - data transfers are performed in a more expeditious manner, still in bulk transfers, but done in realtime.
    • Remote mirroring - most advanced, most expensive. Live DB server is maintained at the backup site.

Recovery Plan Development

  • Maintain multiple types of plan documents for different audiences
  • Checklists
  • Emergency response - simple but comprehensive instructions for essential personnel to follow immediately upon recognizing that a disaster is in progress. Most important tasks first.
  • Personnel and communications - List of personnel to contact in the event of a disaster
  • Backups and offsite storage
    • Full backups - complete copy
    • Incremental backups - Files that have been modified since the most recent full or incremental backup. Only files with the archive bit turned on are duplicated, then those bits are turned off (see the sketch after this list).
    • Differential backups - Files that have been modified since the last full backup. Only files with the archive bit turned on, but the bit is left on afterwards.
    • The difference between incremental and differential is the time needed to restore data in an emergency vs the time taken to create the backups.
  • Software escrow arrangement - protects a company against failure of a developer to provide adequate support for products or if the developer goes out of business
  • External communications may be performed by public relations officials
  • Logistics refers to the problem of moving large numbers of people, equipment, and supplies
  • Recovery is bringing business operations and processes back to a working state, while restoration involves bringing the facility and environment back to a working state. DRP should define criteria for both.
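
To make the incremental/differential distinction concrete, here is a toy Python sketch of the archive-bit behavior described above; the file catalog is hypothetical.

```python
# Hypothetical catalog: filename -> archive bit (True = modified since
# the file was last captured by a full or incremental backup).
files = {"a.doc": True, "b.xls": True, "c.txt": False}

def differential_backup(files):
    # Copies every file modified since the last full backup and leaves
    # the archive bit set, so each differential grows until the next full.
    return [f for f, modified in files.items() if modified]

def incremental_backup(files):
    backed_up = [f for f, modified in files.items() if modified]
    for f in backed_up:
        files[f] = False  # incremental clears the archive bit
    return backed_up

print(differential_backup(files))  # ['a.doc', 'b.xls'] -- bits left on
print(incremental_backup(files))   # ['a.doc', 'b.xls'] -- bits cleared
print(incremental_backup(files))   # [] -- nothing modified since last run
```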

Training, Awareness, and Documentation

  • Should have these elements
    • Orientation training for new employees
    • Initial training on new DR roles
    • Detailed refresher training for DR team members
    • Awareness refreshers for all other employees
  • DRP should be treated as extremely sensitive and provided to individuals on a compartmentalized, need-to-know basis

Testing and Maintenance

  • Read through test - Distribute copies of DR plans and review them
  • Structured walk through - Table-top exercise where the members of the DR team gather and role-play the plan
  • Simulation test - Team members are presented with a scenario and asked to develop a response
  • Parallel test - Relocating personnel to alternate recovery site and implementing site activation procedures
  • Full-interruption test - Actually shut down the primary site and shift them to the backup site
  • DR plans are living documents and need maintenance. They should refer to the organization’s business continuity plan as a template.

CISSP Study Notes Chapter 17 - Preventing and Responding to Incidents

Chapter 17 goes over conducting logging and monitoring activities, conducting incident management, and operating and maintaining detective and preventative measures.

Chapter 17 - Preventing and Responding to Incidents

My key takeaways and crucial points

Managing Incident Response

  • Defining an incident
    • Any event that has a negative effect on the confidentiality, integrity, or availability of an organization’s assets
    • ITIL says it’s any unplanned interruption
    • A computer security incident is a result of an attack, or the result of malicious or intentional actions on the part of users
    • NIST 800-61
  • Incident response steps
    • Incident response is an ongoing activity
    • Does not include a counterattack
      • Usually illegal, often results in escalation
  • Detection
    • Must be able to quickly identify false alarms and user errors
  • Response
    • Computer incident response team (CIRT)
    • Activate the team during a major security incident, but not for minor incidents
    • Computers should not be turned off when containing an incident
      • Important for forensics
  • Mitigation
    • Contain an incident
    • Limit the effect or scope of an incident
    • Address it without worrying about it spreading
  • Reporting
    • Within the org and to groups outside the org
    • Beware of legal requirements
    • If a data breach exposes PII, the organization must report it
    • Consider reporting the incident to official agencies, they might be able to help
  • Recovery
    • Recover the system or return it to a fully functioning state
    • Restoring data
    • Ensure it is configured properly and is at least as secure as it was before the incident
    • Configuration management and change management programs are important here
  • Remediation
    • Attempt to identify what allowed it to occur, implement methods to prevent it from happening again
    • Root cause analysis
  • Lessons learned
    • Examine the incident and the response to see if there are any lessons to be learned
    • Improve the response
    • Output of this stage can be fed back to the detection stage
    • Create a report

Implementing Detective and Preventive Measures

  • Basic preventive measures
    • Keep systems and applications up to date
    • Remove or disable unneeded services and protocols
    • Use intrusion detection and prevention systems
    • Use up to date anti-malware software
    • Use firewalls
    • Implement configuration and system management processes

Understanding Attacks

  • Botnets
    • Bot herder
      • A criminal who controls computers in the botnet via one or more command and control servers
    • Defense in depth
    • Educating users
  • Denial of service attacks
    • DoS attacks
    • Prevent a system from processing or responding to legitimate traffic or requests for resources and objects
    • Distributed denial of service attacks occur when multiple systems attack a single system at the same time
    • Distributed reflective denial of service attack doesn’t attack the victim directly but manipulates traffic or a service so the attacks are reflected back to the victim from other sources
  • SYN flood attack
    • Disrupts the standard 3 way handshake used by TCP
    • Consume available memory and processing power
    • SYN cookies can block this attack
    • Reduce the amount of time a server will wait for an ACK
  • Smurf attacks
    • Another type of flood attack, but floods with ICMP echo packets instead of TCP SYN
  • Fraggle attacks
    • Similar to smurf, but instead of ICMP, uses UDP packets over ports 7 and 19
  • Ping flood
    • Floods a victim with ping requests
  • Ping of death
    • Oversized ping packet
    • Buffer overflow error
  • Teardrop
    • Attacker fragments traffic in a way that a system is unable to put it back together
  • Land attacks
    • Attacker sends spoofed SYN packets to a victim using the victim’s IP address as both the source and destination IP
  • Zero day exploit
    • Vulnerabilities that are unknown to others
    • The attacker is the only one aware of the vulnerability, before the vendor makes a patch
    • The gap between when the vendor releases the patch and when administrators apply it is a dangerous zone
    • Honeypots and padded cells
  • Malicious code
    • Any script or program that performs an unwanted, unauthorized or unknown activity on a computer system
    • Drive by downloads
      • Code is installed on a user’s system without the user’s knowledge
  • Man in the middle attacks
    • MITM
    • Malicious users gain a position logically between the two endpoints
    • Copying or sniffing the traffic between parties
    • A store and forward or proxy mechanism
    • Intrusion detection systems cannot usually detect MITM or hijack attacks
    • VPNs
  • Sabotage
    • A criminal act of destruction or disruption committed against an organization by an employee
    • Employee terminations should be handled swiftly
      • Account access should be disabled ASAP
  • Espionage
    • The malicious act of gathering classified information about an organization
    • Disclosing or selling information to a competitor
    • Mole/plant - an employee with a secret allegiance to another organization whose goal is to steal information
    • Screen and track employees effectively

Intrusion Detection and Prevention Systems

  • Intrusion
    • Attacker can bypass security mechanisms
  • Intrusion detection
    • Monitors recorded information
  • Intrusion prevention system
    • IPS
    • Can take steps to stop/prevent intrusions
    • NIST 800-94
  • Knowledge/Behavior-based detection
    • Knowledge based
      • aka signature based
      • Database of known attacks
    • Behavior based
      • aka statistical/anomaly, heuristics
      • Creates a baseline of normal activities and events on a system, detects abnormal activity
      • aka an Expert System
  • SIEM systems
    • Security information and event management system
    • Advanced analytic tools
    • Passive response
      • Notifies administrators
    • Active response
      • Can modify the environment
        • Modifies ACLs, addresses, disable communications over specific segments, etc.
  • Host and Network based IDSs
    • Host based
      • Monitors a single computer
      • Can detect anomalies on the host that a network IDS cannot
      • Requires admin attention on each system
    • Network based
      • Evaluates network activity
      • Can monitor a large network to collect data at key locations
      • Switches are often used as a preventive measure against rogue sniffers
      • Very little effect on network performance
      • Usually able to detect the initiation of an attack, but not always the success of an attack
  • Intrusion prevention systems
    • Placed in line with traffic
    • Active IDS that is not placed in line can check the activity only after it has reached the target

Specific Preventive Measures

  • Honeypot
    • Individual computers created as a trap for intruders
    • Honeynet, a network of honeypots
    • Do not host any data of real value
    • Opportunity to observe an attacker’s activity
    • Enticement vs entrapment
      • Intruder must discover it through no outward effort of the honeypot owner
    • Pseudo flaws
      • False vulnerabilities
    • Padded cells
      • Look and feel like an actual network but attackers are unable to perform any malicious activities
      • Offer fake data
  • Warning banners
    • Inform users and intruders about security policy guidelines
    • Legally bind users
    • Act like “no trespassing” signs
  • Anti-malware
    • Signature files and heuristic capabilities must be kept up to date
    • Firewalls with content-filtering capabilities
    • Install only one anti-malware application on any system
    • Least privilege helps
    • Educating users
  • Whitelisting and blacklisting
    • Should be called allowlisting and denylisting, but these are the terms CISSP uses
    • Whitelisting identifies a list of applications that are authorized to run (see the sketch after this list)
    • Blacklisting is a list of applications that are blocked
  • Firewalls
    • Filtering traffic based on IP address, port, protocols
    • Second generation firewalls add additional filtering capabilities based on application requirements
    • Next generation firewalls function as a unified threat management device and have even more filtering, like packet filtering and stateful inspection, as well as deep packet inspection
  • Sandboxing
    • Prevents the application from interacting with other applications
    • Virtualization techniques
  • Third party security services
    • SaaS
      • Software as a service
  • Penetration testing
    • Mimics an actual attack to attempt to identify which techniques attackers can use to circumvent security
    • NIST 800-115
    • Include a vulnerability scan/assessment
    • Attempt to exploit weaknesses
    • Determine how well a system can tolerate attack
    • Identify employees’ ability to detect and respond to attacks in real time
    • Identify additional controls that can be implemented to reduce risk
    • Pentesting risks
      • Some methods can cause outages
      • Should stop before doing actual damage
      • Should try to perform pentesting in a test system
    • Must always have permission in writing with the risks spelled out
    • Black box testing
      • Zero knowledge
    • White box testing
      • Full knowledge
    • Gray box testing
      • Partial knowledge
    • Social engineering techniques are often used
    • Must protect pentesting reports because they describe attacks against the system
    • Reports must make a recommendation
    • AKA ethical hacking
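
A toy sketch of hash-based application whitelisting in Python, as referenced above; the approved hash set and the binaries are hypothetical.

```python
import hashlib

# Hypothetical whitelist of approved application hashes.
approved = {hashlib.sha256(b"calc-v1.0-binary").hexdigest()}

def may_run(binary_bytes):
    # Whitelisting: only known-good applications are authorized to run;
    # anything not on the list is implicitly denied.
    return hashlib.sha256(binary_bytes).hexdigest() in approved

print(may_run(b"calc-v1.0-binary"))  # True -- on the whitelist
print(may_run(b"unknown-tool"))      # False -- implicitly blocked
```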

Logging, Monitoring, and Auditing

  • Logging
    • Recording information about events to a file or database
  • Log types
    • Security logs - access to resources
    • System logs - system events
    • Application logs - specific applications
    • Firewall logs
    • Proxy logs - include details such as what sites specific users visit and how much time they spend on those sites
    • Change logs
      • Part of a disaster recovery program
  • Protecting log data
    • Use logs to recreate events leading up to and during an incident only if the logs haven’t been modified
    • Store copies on a central system like a SIEM
    • FIPS 200
  • Audit trails
    • Records created when information about events is stored in one or more databases or log files
    • Passive form of detective security control
    • Also serve as a deterrent
    • Essential as evidence in the prosecution of criminals
  • Monitoring and accountability
    • Users claim an identity and must prove their identity by authenticating
    • Audit trails record their activity
    • Users who are aware that logs are recording their activity are less likely to try to circumvent security controls or perform unauthorized activities
  • Monitoring
    • The process of reviewing logs looking for something specific
    • Continuous process
    • Log analysis
      • Detailed form of monitoring, logs are analyzed for trends and patterns
    • Many orgs use a centralized application for monitoring
    • SIEMs may include a correlation engine to help combine multiple log sources into meaningful data
  • Sampling
    • Extracting elements from a large collection to construct a meaningful representation of the whole
  • Clipping levels
    • Predefined threshold for an event; events are ignored until they reach the level (see the sketch after this list)
  • Keystroke monitoring
    • Act of recording keystrokes a user performs on a keyboard
    • Often compared to wiretapping
  • Traffic and trend analysis
    • Examine the flow of packets rather than the contents
  • Egress monitoring
    • Watching outgoing traffic to prevent data exfiltration
  • Data loss prevention
    • Detect and block data exfiltration attempts
    • Network based scan all outgoing data
    • Endpoint based scan files stored on a system
    • Deep level examinations of data in files
  • Steganography
    • The practice of embedding a message within a file
  • Watermarking
    • The practice of embedding an image or pattern in paper that isn’t readily perceivable, often to thwart counterfeiting attempts
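
A minimal Python sketch of a clipping level, as referenced above; the threshold and the login events are hypothetical.

```python
from collections import Counter

CLIPPING_LEVEL = 3  # hypothetical threshold for failed logins
failed_logins = Counter()

def record_failure(user):
    failed_logins[user] += 1
    # Events below the clipping level are treated as routine noise.
    if failed_logins[user] >= CLIPPING_LEVEL:
        print(f"ALERT: {user} reached {failed_logins[user]} failed logins")

for user in ["bob", "alice", "bob", "bob"]:
    record_failure(user)
# Only bob's third failure crosses the clipping level and raises an alert.
```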

Auditing to Assess Effectiveness

  • Auditing
    • A methodical examination of an environment
    • Use audit logs and monitoring tools to track activity
    • Auditing - Inspection or evaluation
  • Auditors
    • Test and verify that processes and procedures are in place to implement security policies or regulations
  • Inspection audits
    • Clearly define and adhere to the frequency of audit reviews
  • Access review audits
    • Ensure that object access and account management practices support the current security policy
    • Ensure that accounts are disabled and deleted in accordance with best practices and security policies
    • Typical termination process:
      • At least one witness is present during exit interview
      • Account access terminated during interview
      • Employee ID badges and physical credentials are collected
      • Employee escorted off premises immediately
  • User entitlement audits
    • Refers to the privileges granted to users
    • Enforce least privilege principle
  • Audits of privileged groups
    • High level administrator groups
    • Dual administrator accounts
      • Separation of privileges (normal account and a privileged account)
  • Security audits and reviews
    • Patch management - patches are evaluated ASAP, properly deployed through a testing process
    • Vulnerability management - compliance with established guidelines, scans and assessments
    • Configuration management - Use tools to check specific configurations of systems and identify when a change has occurred
    • Change management - Changes are implemented in accordance with the change management policy
  • Reporting audit results
    • Report needs purpose, scope and results
  • Protecting audit results
    • Contain sensitive information, need a classification label
    • Sometimes create a separate audit report with limited data for separate distribution
    • When distributing, get signed confirmation
  • External auditors
    • Some laws require this
    • Provide a level of objectivity that internal audits can’t
    • Interim reports - written or verbal reports given to the org about observations that demand immediate attention

CISSP Study Notes Chapter 16 - Managing Security Operations

Chapter 16 goes over securely provisioning resources, understanding and applying foundational security operations concepts, applying resource protection techniques, implementing and supporting patch and vulnerability management, understanding and participating in change management, and addressing personnel safety and security concerns.

Chapter 16 - Managing Security Operations

My key takeaways and crucial points

Applying Security Operations Concepts

  • Due care and due diligence refer to taking reasonable care to protect the assets of an organization on an ongoing basis
  • Need to know
    • Focuses on permissions and the ability to access information
    • Rights
      • Refers to the ability to take actions
    • Grant users access only to data or resources they need to perform assigned work tasks
  • Least privilege
    • Granted only the privileges necessary to perform assigned work tasks and no more
    • Entitlement
      • The amount of privileges granted to users
    • Aggregation
      • The amount of privileges that users collect over time
    • Transitive trust
      • A trust relationship between two security domains
  • Separation of privilege
    • No single person has total control
    • Collusion
      • An agreement by two or more persons to perform some unauthorized activities
    • Helps reduce fraud
    • Builds on least privilege
    • Segregation of duties is specifically required by SOX
  • Two person control
    • Operations that require two keys
    • Ensures peer review, reduces likelihood of fraud
    • Split knowledge is where information or privilege is divided among multiple users
  • Job rotation
    • Encourages peer review, reduces fraud, enables cross training
    • Acts as both a deterrent and a detection mechanism
  • Mandatory vacations
    • Peer review, helps detect fraud and collusion
    • Acts as a deterrent and a detection mechanism

Privileged Account Management

  • Special privilege operations
    • Activities that require special access or elevated rights and permissions to perform
    • Sensitive job tasks
  • Monitor usage of special privileges so organizations can deter employees from misusing privileges and detect improper actions
  • Perform access review audits

Managing the Information Lifecycle

  • Creation/capture
    • Data is created by users, downloading files, etc.
  • Classification
    • Should be done asap
    • Ensure that sensitive data is identified and handled appropriately based on its classification
    • Once data is classified, it can be marked and handled correctly
      • Easily recognize data’s value
  • Storage
    • Periodically back up
    • Encrypted
    • Physical security
  • Usage
    • Any time data is in use or in transit over a network
    • Used in an unencrypted format
  • Archive
    • Comply with laws/regulations regarding data retention
    • Ensure data is available
  • Destruction/purging
    • NIST 800-88r1

Service Level Agreements

  • Defines performance expectations and penalties
  • Sometimes have memorandums of understanding
    • and/or an interconnection security agreement
    • Two entities work together toward a common goal
  • Can specify technical requirements

Addressing Personnel Safety and Security

  • Always possible to replace equipment and data, can’t replace people
  • Human safety is ALWAYS top priority
  • Duress
    • A simple duress system is just a panic button that sends a distress call
    • More common when working alone
    • Code words or phrases
  • Travel
    • Verify a person’s identity before opening a hotel door
    • Sensitive data is ideally not brought on the road, but if it is it needs to be encrypted
    • Malware/monitoring devices
      • Maintain physical control of all devices
      • Do not bring personal devices
    • Free WiFi
      • Vulnerable to man in the middle attacks
  • Emergency management
    • Natural or man-made disasters
    • Locate sensitive physical assets toward the center of the building

Managing Virtual Assets

  • Reduction in overall operating costs when going virtual
  • Hypervisor
    • Essential virtualization software

Managing Cloud Based Assets

  • SaaS
    • Software as a service
    • Fully functional applications accessed via web browser, usually
  • PaaS
    • Platform as a service
    • Computing platform, including hardware, an OS, and applications
  • IaaS
    • Infrastructure as a service
    • Basic computing resources
  • NIST 800-145
  • Public cloud
    • Available to any consumers
  • Private cloud
    • Single organization
  • Community cloud
    • Two or more organizations
  • Hybrid cloud
    • Combination of two or more clouds

Media Management

  • Includes any hard copy of data
  • When media is marked, handled and stored properly, it helps prevent unauthorized disclosure (loss of confidentiality), unauthorized modifications (loss of integrity), and unauthorized destruction (loss of availability)
  • Tape media
    • Keep at least two copies of backups
    • At least one offsite
  • Mobile devices
    • MDM system monitors and manages devices, ensures they are up to date
    • Encryption protects data if phone is lost or stolen
  • Managing media lifecycle
    • Once backup media has reached its MTTF (mean time to failure), it should be destroyed
    • Degaussing does not remove data from an SSD

Managing Configuration

  • Baselining
    • Baselines are starting points
    • When systems are deployed in a secure state with a secure baseline, they are more likely to stay secure
  • Using images for baselining
    1. Create the image
    2. Capture the image
    3. Deploy the image
  • Ensure that desired security settings are always configured correctly (a drift-check sketch follows)
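
As a rough illustration of that last point, here is a minimal Python sketch of a configuration drift check; the setting names and values are entirely hypothetical:

    # Minimal sketch: detect drift from a secure baseline (hypothetical settings).
    baseline = {"firewall": "on", "guest_account": "disabled", "smbv1": "disabled"}
    current  = {"firewall": "on", "guest_account": "enabled",  "smbv1": "disabled"}

    drift = {key: (baseline[key], current.get(key))
             for key in baseline if current.get(key) != baseline[key]}
    print(drift)  # {'guest_account': ('disabled', 'enabled')} -> flag for remediation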

Managing Change

  • Change management
    • Reduces unanticipated outages caused by unauthorized changes
    • Primary goal is to ensure that changes do not cause outages
  • Unauthorized changes directly affect availability
  • Security impact analysis
    1. Request the change, identify desired changes
    2. Review the change
    3. Approve/reject the change
    4. Test the change
    5. Schedule and implement the change when it will have the least impact
    6. Document the change
  • Emergency changes can still occur, but the process still needs to document the changes
  • Versioning
    • Labeling or numbering system that differentiates between different software sets and configurations across multiple machines or at different points in time on a single machine
  • Configuration documents
    • Who is responsible
    • Purpose of the change
    • List all changes to the baseline

Managing Patches and Reducing Vulnerabilities

  • Systems to manage
    • Any computing device with an OS
    • Network infrastructure systems
    • Embedded systems
  • Patch management
    • Patch
      • Any type of code written to correct a bug or vulnerability or improve the performance of existing software
    • Evaluate patches
      • Determine if they apply to your systems
    • Test patches
      • Test on an isolated nonproduction system
      • Determine unwanted side effects
    • Approve the patches
      • Change management
    • Deploy the patches
      • Automated methods
    • Verify that patches are deployed
      • Regularly test and audit systems
  • Vulnerability management
    • Identifying vulnerabilities, evaluating them, mitigating risks
    • Vulnerability scans
      • Test systems and networks for known security issues
      • Nessus by Tenable Network Security
      • Generate reports
    • Vulnerability assessments
      • Scan reports from past year to determine if the organization is addressing vulnerabilities
      • “Why hasn’t this been mitigated?”
      • Part of a risk analysis or assessment
    • Common vulnerabilities and exposures
      • MITRE maintains the CVE database: cve.mitre.org
      • MITRE is not an acronym; it is funded by the US government to maintain the database

CISSP Study Notes Chapter 15 - Security Assessment and Testing

Chapter 15 is a hefty chapter which covers designing and validating assessment, test, and audit strategies, conducting security control testing, collecting security process data, and then analyzing test output, and conducting security audits.

Chapter 15: Security Assessment and Testing

My key takeaways and crucial points

Security Testing

  • Security tests
    • Verify that a control is functioning properly
  • Frequent automated tests supplemented by infrequent manual tests are recommended
    • Review the results of those tests to ensure that each test was successful

Security Assessments

  • Security assessment
    • Comprehensive reviews of the security of a system, application, or other tested environment
  • Information security professional performs a risk assessment
  • Assessment report addressed to management

Security Audits

  • Security audit
    • Use many of the same techniques for assessments, but must be performed by independent auditors
  • Assessments are internal use only
  • Audits are done for the purpose of demonstrating the effectiveness of controls to a third party
  • Internal audits
    • Intended for internal audiences
  • External audits
    • Performed by outside auditing firms
  • Third party audits
    • Conducted by, or on behalf of, another organization
    • Type I
      • Describes the controls provided by the audited organization, plus an auditor opinion based on that description
    • Type II
      • Covers a minimum six-month period and also includes an opinion from the auditor
      • Considered more reliable
  • Auditing standards
    • COBIT
      • Describes common requirements orgs should have in place surrounding information systems
    • ISO 27001
      • A standard approach for setting up an information security management system (ISMS)
      • ISO 27002 goes into more detail

Describing Vulnerabilities

  • Common vulnerabilities and exposures (CVEs)
    • A naming system for vulnerabilities
  • Common vulnerability scoring system (CVSS)
    • A scoring system for severity
  • Common platform enumeration (CPE)
    • A naming system for operating systems, applications, and devices
  • Extensible configuration checklist description format (XCCDF)
    • Language for security checklists
  • Open vulnerability and assessment language (OVAL)
    • Language for security testing procedures

Vulnerability Scans

The instructor for my bootcamp told us that this is a heavily tested section, and trips up a ton of test takers

  • Vulnerability scans
    • Automatically probe systems
  • Network discovery scans
    • NMAP - a network scanning tool
    • TCP SYN scanning
      • Single packets sent with the SYN flag set
    • TCP connect scanning
      • Opens a full connection to the remote system (see the sketch after this list)
    • TCP ACK scanning
      • Send a packet with ACK set, indicating it’s part of an open connection
      • Helps determine firewall rules
    • Xmas scanning
      • FIN, PSH, URG flags are set on packets sent to systems
  • Port statuses
    • Open - there is an application that is actively accepting connections
    • Closed - The firewall is allowing access, but there is no application accepting connections
    • Filtered - Unable to determine if a port is open or closed because of a firewall
  • Network vulnerability scanning
    • Deeper than a discovery scan
    • Tools contain databases of thousands of known vulnerabilities, and tests that can be performed to identify whether a system is susceptible to each vulnerability
    • False positives and false negatives may occur
    • By default, vulnerability scanners run unauthenticated scans
  • TCP ports
    Service                 Port(s)
    -------                 -------
    FTP                     20-21
    SSH                     22
    Telnet                  23
    SMTP                    25
    DNS                     53
    HTTP                    80
    POP3                    110
    NTP                     123
    Windows file sharing    135, 137-139 (NetBIOS, WINS), 445
    HTTPS                   443
    LPR/LPD                 515
    Microsoft SQL Server    1433/1434
    Oracle                  1521
    H.323                   1720
    PPTP                    1723
    RDP                     3389
    HP JetDirect Printing   9100
  • Nessus, Qualys, Rapid7’s NeXpose, OpenVAS are all vulnerability scanners
  • Aircrack is used to scan wireless networks
  • Web vulnerability scanning
    • Structured Query Language (SQL) injection, leveraging poor input validation/sanitization
    • Web vulnerability scanners scour web applications for known vulnerabilities
    • Nessus does this, too, also Acunetix, Nikto, Wapiti, Burp Suite
    • Scan all applications when you begin performing scanning for the first time
    • Scan new applications when moving into production
    • Scan before code changes go to production
    • Scan on a recurring basis
  • Vulnerability management workflow
    1. Detection - Identification of a vulnerability
    2. Validation - Confirm the vulnerability is not a false positive
    3. Remediation - Patch, change configurations, implement a workaround
  • Penetration testing
    • Actually attempting to exploit systems, not just scan them
    • Done by trained security professionals
    • Process
      1. Planning - agree on scope, rules of engagement
      2. Information gathering and discovery - tools collect information, reconnaissance
      3. Vulnerability scanning - probes for system weaknesses
      4. Exploitation - use automated and manual exploitation tools to defeat system security
      5. Reporting - results of the penetration test, make recommendations
    • Metasploit is a common tool
    • Types of penetration tests
      1. White box - attackers have detailed information about the target systems
      2. Gray box - attackers have partial knowledge about target systems
      3. Black box - attackers are not provided with any information
        • They should be done in this order
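
To make the scan types and port statuses concrete, here is a minimal Python sketch of a TCP connect scan, the simple full-handshake variant (SYN, ACK, and Xmas scans require raw packets and a tool such as Nmap). The three outcomes map onto the port statuses listed above:

    import socket

    def connect_scan(host, ports, timeout=1.0):
        # TCP connect scan: attempt a full three-way handshake on each port.
        results = {}
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                s.connect((host, port))
                results[port] = "open"        # handshake completed; a listener accepted
            except ConnectionRefusedError:
                results[port] = "closed"      # host reachable, but nothing listening
            except (socket.timeout, OSError):
                results[port] = "filtered"    # no response; a firewall may be dropping packets
            finally:
                s.close()
        return results

    print(connect_scan("127.0.0.1", [22, 80, 443]))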

Testing Your Software

  • Applications often have privileged access
  • Apps often handle sensitive information
  • They often rely on databases
  • Code review
    • AKA peer review
    • Approval of an application’s move into production
    • Fagan inspection
      1. Planning
      2. Overview
      3. Preparation
      4. Inspection
      5. Rework
      6. Follow up
  • Static testing
    • Done without running the software; the source code or compiled application is analyzed instead
  • Dynamic testing
    • Done in a runtime environment
    • Testers often do not have access to underlying source code
    • Synthetic transactions are scripted transactions with an application with known expected results
  • Fuzz testing
    • Different types of input are given to software to test its limits and find previously undetected flaws
    • Mutation “dumb” fuzzing
      • Takes previous input values and mutates them to create fuzzed input (see the sketch after this list)
    • Generational “intelligent” fuzzing
      • Data models used to create new fuzzed input based on understanding of data types used by the system
    • zzuf - a tool that performs fuzzing
  • Interface testing
    • Different parts of a complex app that must function together are tested
  • Misuse case testing
    • Enumerate the known misuse cases
    • How can software be abused?
  • Test coverage analysis
    • Estimate the degree of testing conducted
      • Test coverage = number of use cases tested / total number of use cases
    • Branch coverage
      • Has every if statement been executed under both the if and else conditions?
    • Conditional coverage
      • Has every logical test in the code been executed under all sets of inputs?
    • Function coverage
      • Has every function in the code been called and returned results?
    • Loop coverage
      • Has every loop in the code been executed under conditions that cause code execution multiple times, once, and not at all?
    • Statement coverage
      • Has every line of code been executed during the test?
  • Website monitoring
    • Passive monitoring
      • Analyze actual network traffic
      • Real user monitoring reassembles the activity of individual users
    • Synthetic monitoring
      • AKA Active monitoring
      • Performing artificial transactions
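
As a concrete illustration of mutation (“dumb”) fuzzing, here is a minimal Python sketch that flips random bits in a known-good input; the program under test is left hypothetical:

    import random

    def mutate(seed: bytes, n_flips: int = 3) -> bytes:
        # Mutation fuzzing: randomly flip a few bits in a known-good input.
        data = bytearray(seed)
        for _ in range(n_flips):
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)   # flip one random bit of one random byte
        return bytes(data)

    seed = b'{"user": "alice", "age": 30}'        # known-good input
    for _ in range(5):
        print(mutate(seed))                       # feed each variant to the software under test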

Implementing Security Management Processes

  • Log reviews
    • Logging systems should use Network Time Protocol (NTP) to ensure clock synchronization
    • Periodically review logs
  • Account management
    • Ensure users only retain authorized permissions and that unauthorized modifications do not occur
    • Example process
      1. Provide a list of users with privileged access
      2. Ask the privilege approval authority to provide a list of authorized users
      3. Compare the two lists (see the sketch after this list)
    • Lots of other checks, like terminated users
    • Check paper trails
  • Backup verification
  • Key performance and risk indicators
    • Monitor key performance and risk indicators
    • Number of open vulnerabilities
    • Time to resolve vulnerabilities
    • Vulnerability/defect recurrence
    • Number of compromised accounts
    • Number of software flaws detected in pre-production scanning
    • Repeat audit findings
    • User attempts to visit known malicious sites
    • Lots more, come up with your own depending on what’s important to your org
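
Steps 1 through 3 of that example boil down to a set comparison. A minimal Python sketch, with hypothetical usernames:

    # Compare actual privileged users against the approved list (hypothetical data).
    actual = {"alice", "bob", "mallory"}        # step 1: users who hold privileged access
    authorized = {"alice", "bob"}               # step 2: users the approval authority lists
    print("Revoke:", actual - authorized)       # step 3: access nobody approved
    print("Investigate:", authorized - actual)  # approvals with no matching account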

CISSP Study Notes Chapter 14 - Controlling and Monitoring Access

Chapter 14 is about identity and access management (IAM), and discusses all kinds of different access control: role based, rule based, mandatory, discretionary, and attribute based.

Chapter 14: Controlling and Monitoring Access

My key takeaways and crucial points

Comparing Permissions, Rights, and Privileges

  • Permissions
    • The access granted for an object and what you can do with it
  • Rights
    • The ability to take an action on an object
  • Privileges
    • The combination of rights and permissions

Understanding Authorization Mechanisms

  • Implicit deny
    • Access to an object is denied unless it has been explicitly granted
  • Access control matrix
    • A table that includes subjects, objects, and assigned privileges (see the sketch after this list)
  • Capability tables
    • Like an ACL, but focused on subjects
  • Constrained interface
    • Restricted interfaces that control what users can do or see based on their privileges
  • Content-dependent control
    • Restrict access to data based on the content within an object
    • Ex: A database view
    • “What” data is being accessed
  • Context-dependent control
    • Require specific activity before granting access
    • Ex: Date and time bound access
    • “How” you’re accessing data
  • Need to know
    • Granted access only to what you need to know to perform your job
  • Least privilege
    • Subjects are granted only the privileges they need to perform their work tasks and job functions
    • Will also include rights to take action on a system
  • Separation of duties and responsibilities
    • Sensitive functions are split into tasks performed by two or more employees
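
Here is a minimal Python sketch of an access control matrix with implicit deny, using hypothetical subjects and objects; each row acts as a capability table for one subject, and each column as an ACL for one object:

    # Access control matrix: rows are subjects, columns are objects.
    matrix = {
        "alice": {"payroll.db": {"read", "write"}, "report.doc": {"read"}},
        "bob":   {"report.doc": {"read"}},
    }

    def allowed(subject, obj, action):
        # Implicit deny: anything not explicitly granted is refused.
        return action in matrix.get(subject, {}).get(obj, set())

    print(allowed("alice", "payroll.db", "write"))  # True
    print(allowed("bob", "payroll.db", "read"))     # False - implicit deny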

Defining Requirements with a Security Policy

  • Security policy
    • A document that defines the security requirements for an organization
  • Senior leadership approves the security policy

Implementing Defense in Depth

  • Defense in depth
    • Multiple layers or levels of access controls to provide layered security
  • Key components
    • Security policy
    • Personnel, training
    • Combination of administrative, technical, and physical access controls

Summarizing Access Control Models

  • Discretionary access control
    • Every object has an owner who can grant or deny access to any other subjects
  • Role based access control
    • User accounts are placed in roles and administrators assign privileges to the roles
  • Rule based access control
    • Global rules apply to all subjects
    • AKA restrictions/filters
  • Attribute based access control
    • Uses rules that can include multiple attributes
    • More flexible than rule based access control
    • Plain language statements
  • Mandatory access control
    • Labels applied to both subjects and objects

Discretionary Access Control

  • Allows the owner/creator/data custodian of an object to control and define access to the object
  • Using access control lists (ACLs)

Nondiscretionary Access Control

  • Administrators centrally administer access controls and can make changes that affect the entire environment

Role Based Access Control

  • AKA task-based access control
  • AKA RBAC
  • Privilege creep
    • Users accrue privileges over time as their roles and access needs change
  • Administrators identify roles/groups by work function
  • Useful in dynamic environments with frequent personnel changes

Rule Based Access Control

  • Rules, restrictions, filters determine what can and cannot occur on a system
  • Global rules apply to all subjects
  • RBAC refers to ROLE based access control
  • Firewalls include a set of rules within an ACL
  • Implicit deny

Attribute Based Access Control

  • ABAC
  • Uses policies that include multiple attributes for rules
  • Can be any characteristic of users, network, devices

Mandatory Access Control

  • MAC
  • Uses classification labels
  • Security domain
    • A collection of subjects and objects that share a common security policy
  • Often referred to as a lattice-based model
  • Compartmentalization enforces need to know principle
  • Hierarchical environment
    • Ordered structure from low security to medium security to high security
    • Classification labels
  • Compartmentalized environment
    • No relationship between one security domain and another
  • Hybrid environment
    • Combines both hierarchical and compartmentalized concepts

Understanding Access Control Attacks

  • Risk elements
    • Threat
      • A potential occurrence that can result in an undesirable outcome
    • Vulnerability
      • Any type of weakness
    • Risk management
      • Attempting to reduce or eliminate vulnerabilities, or reduce the impact of potential threats by implementing controls or countermeasures
      • Process
        • Identify assets
          • Asset valuation - identifying the actual value of an asset so you may prioritize them
        • Identify threats
          • Threat modeling - identifying, understanding and categorizing potential threats
        • Identify vulnerabilities
  • Advanced persistent threats
    • APTs
    • Attackers who are working together, highly motivated, skilled, and patient
    • Advanced knowledge
  • Threat modeling approaches
    • Focused on assets
      • Identify threats to valuable assets
    • Focused on attackers
      • Based on attackers’ goals
    • Focused on software
      • Based on potential threats against software
  • Identifying vulnerabilities
    • Identifying strengths and weaknesses of different access control mechanisms

Common Access Control Attacks

  • Access aggregation attacks
    • Collecting multiple pieces of non-sensitive information and combining them to learn sensitive information
    • Reconnaissance
  • Password attacks
    • Passwords are the weakest form of authentication
    • Dictionary attack
      • Attempt to discover passwords by using every possible password in a predefined database or list of common or expected passwords
    • Brute force attack
      • Attempt to discover passwords for accounts by systematically attempting all possible combinations of letters, numbers and symbols
      • Hybrid attack attempts a dictionary attack and then a brute force
    • Birthday attack
      • Focuses on finding collisions
      • The birthday paradox states that if there are 23 people in a room, there is a 50% chance that two of them will have the same birthday (month and day only, not year)
    • Rainbow table attack
      • Rainbow tables are large databases of precomputed hashes
      • Salt passwords to reduce the effectiveness of rainbow tables (see the sketch after this list)
        • Salt is random bits added to a password before hashing it, stored in the same database holding the hashed password
        • Pepper is a large constant number stored elsewhere
    • Sniffer attacks
      • Capture packets sent over a network to analyze them
      • AKA snooping attack
      • Wireshark is a popular tool for this
      • Make sure you
        • Encrypt all sensitive data
        • Use one time passwords
        • Implement physical security
        • Monitor the network for signatures from sniffers
  • Spoofing attacks
    • AKA masquerading
    • Pretending to be something else
    • Email spoofing
      • Spoofing the email address in the from field of an email
      • Phishing
    • Phone number spoofing
      • Caller ID
      • VoIP
  • Social engineering attacks
    • Gaining the trust of someone using deceit to get them to betray organizational security
    • Shoulder surfing is also considered social engineering
      • Looking at someone’s screen while they access information
      • Use screen filters
    • Phishing
      • Getting users to open an attachment, click a link, or reply with personal information
      • Drive by downloads
        • Malware that installs itself without the user’s knowledge when the user visits a website
      • Spear phishing
        • Phishing where specific users or groups are targeted
      • Whaling
        • Senior or high level executives are targeted
      • Vishing
        • Phishing with a phone system or VoIP
  • Smartcard attacks
    • Side channel attack
      • Passive, non-invasive attack that observes how the device functions
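
To see why salting blunts rainbow tables, here is a minimal Python sketch using the standard library’s PBKDF2. Because each user gets random salt, identical passwords hash differently, so precomputed tables don’t match:

    import hashlib, os

    password = b"P@ssw0rd"                           # hypothetical password
    salt = os.urandom(16)                            # random bits, stored with the hash
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    print(salt.hex(), digest.hex())                  # two users with the same password still
                                                     # produce different stored hashes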

Summary of Protection Methods

  • Control physical access to systems
    • If attackers can gain physical access to a server, they can steal it and do anything to it
  • Control electronic access to files
  • Create a strong password policy
  • Hash and salt passwords
  • Use password masking, never display cleartext passwords
  • Deploy multifactor authentication
  • Use account lockout controls
    • Lock an account after the incorrect password is entered a predefined number of times
    • Implement extensive logging
  • Use last logon notification
    • Display information about the last time an account was successfully logged into
  • Educate users about security

CISSP Study Notes Chapter 13 - Managing Identity and Authentication

Chapter 13 is an important chapter that gets into controlling physical and logical access to assets, managing identification and authentication of people, devices and services, integrating identity as a third-party service, and managing the identity and access provisioning lifecycle.

Chapter 13: Managing Identity and Authentication

My key takeaways and crucial points

Comparing Subjects and Objects

  • Subjects are active entities that access a passive object
  • Objects are passive entities that provide information to active subjects

The CIA Triad and Access Controls

  • Confidentiality
    • When unauthorized entities can access systems or data, it results in a loss of confidentiality
  • Integrity
    • Unauthorized changes
  • Availability
    • Data should be available to users and other subjects when needed

Types of Access Control

  • Access control includes the following overall steps
    1. Identify and authenticate users or other subjects attempting to access resources
    2. Determine whether the access is authorized
    3. Grant or restrict access based on the subject’s identity
    4. Monitor and record access attempts
  • Preventive access control
    • Attempts to thwart or stop unwanted or unauthorized activity from occurring
  • Detective access control
    • Attempts to discover or detect unwanted or unauthorized activity
  • Corrective access control
    • Modifies the environment to return systems to normal after an unwanted or unauthorized activity has occurred
  • Deterrent access control
    • Discourages security policy violations
  • Recovery access control
    • Repair or restore resources, functions, and capabilities after a security policy violation
  • Directive access control
    • Direct, confine, or control the actions of subjects to force or encourage compliance with security policies
  • Compensating access control
    • Provides an alternative when it isn’t possible to use a primary control
    • Increase the effectiveness of a primary control
  • Administrative access control
    • Policies and procedures
  • Logical/technical controls
    • Hardware or software mechanisms
  • Physical controls
    • Items you can physically touch

Comparing Identification and Authentication

  • Identification
    • The process of a subject claiming, or professing, an identity
  • Authentication
    • Verifying the identity of the subject by comparing one or more factors against a database of valid identities
  • Registration
    • When a user is first given an identity
  • Authorization
    • Access to objects based on proven identities
    • Indicates who is trusted to perform specific operations
  • Accountability
    • Auditing is implemented
    • The process of tracking and recording subject activities within logs
    • Relies on effective identification and authentication, but does not require effective authorization

Authentication Factors

  • Type 1
    • Something you know
    • Ex: password, PIN
  • Type 2
    • Something you have
    • Ex: token, phone, smartcard
  • Type 3
    • Something you are
    • Ex: Fingerprint, retina scan
  • Context-aware authentication
    • Based on location, time of day, mobile device
    • May implement a geo-fence so resources are only available on some devices if the device is in a specific place
    • Detecting impossible travel
  • Passwords
    • Type 1
    • A static password stays the same for a length of time
    • Weakest form of authentication
    • Creating strong passwords
      • Max age
      • Complexity
      • Length
      • History
    • Passphrases are more effective
      • Longer strings of characters made up of multiple words
    • NIST 800-63B suggests comparing a user’s password against a list of commonly known simple passwords and rejecting the commonly known passwords
  • Cognitive passwords
    • Challenge questions
    • Ex: What is your birth date? What is the name of your first pet?
    • Answers are commonly available on the internet
  • Smartcards
    • Type 2
    • Certificates are used for asymmetric crypto like encrypting data or signing email
  • Tokens
    • Type 2
    • Password-generating devices
    • Synchronous dynamic passwords
      • Time based, synchronized with an authentication server (see the sketch after this list)
    • Asynchronous dynamic passwords
      • Does not use a clock
      • Based on an algorithm and an incrementing counter
  • Biometrics
    • Type 3
    • Using a biometric factor instead of a username requires a one-to-many search
      • Capturing a single image of a person and searching a database of many people looking for a match
    • Using a biometric factor as an authentication technique requires a one-to-one match
      • The user claims an identity and the biometric factor is checked to see if the person matches the claimed identity
    • Retina scans
      • Pattern of blood vessels at the back of the eye
      • Most accurate
    • Iris scan
      • Second-most accurate
    • Palm scans
      • Measure vein patterns in the palm
    • Hand geometry
      • Physical dimensions of the hand
    • Signature dynamics
      • Writes a string of characters
    • Keystroke patterns
      • How the subject uses a keyboard
  • Biometric Factor Error Ratings
    • False rejection rates
      • Type I error
    • False acceptance rates
      • Type II error
    • Crossover error rate (CER)
      • Where false rejection and false acceptance percentages are equal
      • Related to sensitivity of scan/detection
  • Biometric registration
    • Enrollment
    • A subject’s biometric factor is sampled and stored in a database
    • Known as a reference template/profile
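
For synchronous dynamic passwords, here is a minimal Python sketch in the spirit of TOTP (RFC 6238): the token and the authentication server each derive the same counter from the clock, so both compute the same six-digit code. The shared secret is hypothetical:

    import hashlib, hmac, struct, time

    def totp(secret: bytes, step: int = 30) -> int:
        counter = int(time.time()) // step          # both sides derive this from the clock
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return code % 1_000_000                     # six-digit one-time password

    print(totp(b"shared-secret"))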

Multifactor Authentication

  • Must use multiple types/factors such as “something you know” and “something you have”
  • Ex: Typing in a password (something you know), and then entering a synchronous dynamic password from a token (something you have)

Device Authentication

  • Users can register their devices
  • SecureAuth Identity provider (IdP)
  • 802.1x

Implementing Identity Management

  • Single sign on
    • Centralized access control
    • Allows a subject to be authenticated once on a system and to have access to multiple resources without authenticating again
  • LDAP and centralized access control
    • Directory service
  • LDAP and PKIs
    • LDAP and centralized access control systems can be used to support single sign on capabilities
  • Kerberos
    • Key distribution center
      • KDC
      • Trusted third party that provides authentication services
    • Kerberos authentication server
      • Authentication service verifies or rejects the authenticity and timeliness of tickets
      • KDC
    • Ticket granting ticket
      • TGT
      • Provides proof that a subject has authenticated through a KDC and is authorized to request tickets to access other objects
    • Ticket
      • Encrypted message that provides proof that a subject is authorized to access an object
      • Sometimes called a Service Ticket (ST)
    • Know these processes (a toy sketch follows the list)
    • The Kerberos login process
      1. User types a username and password into a client
      2. The client encrypts the username with AES for transmission to the KDC
      3. The KDC verifies the username against a database of known credentials
      4. The KDC generates a symmetric key that will be used by the client and the Kerberos server, encrypts it with a hash of the user’s password, and generates a time-stamped TGT
      5. The KDC transmits the encrypted symmetric key and the encrypted time-stamped TGT to the client
      6. The client installs the TGT for use until it expires and decrypts the symmetric key using a hash of the user’s password
    • The Kerberos ticket request steps
      1. The client sends its TGT back to the KDC with a request for access to the resource
      2. The KDC verifies that the TGT is valid and checks its access control matrix to verify that the user has sufficient privileges to access the requested resource
      3. The KDC generates a service ticket and sends it to the client
      4. The client sends the ticket to the server or service hosting the resource
      5. The server or service hosting the resource verifies the validity of the ticket with the KDC
      6. Once identity and authorization is verified, Kerberos activity is complete, and a session is opened
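
Here is a toy Python sketch of login steps 4 through 6 only; it is not the real protocol, and it assumes the third-party cryptography package is installed. The point it illustrates: the KDC encrypts the session key with a hash of the user’s password, so only a client that knows the password can recover it:

    import base64, hashlib, os
    from cryptography.fernet import Fernet  # third-party package, for illustration only

    password = b"correct horse battery staple"     # hypothetical user password

    # Step 4: the KDC generates a symmetric session key and encrypts it
    # with a key derived from a hash of the user's password.
    kdc_key = base64.urlsafe_b64encode(hashlib.sha256(password).digest())
    session_key = os.urandom(32)
    blob = Fernet(kdc_key).encrypt(session_key)    # step 5: transmitted to the client

    # Step 6: only a client that knows the password can decrypt the session key.
    recovered = Fernet(kdc_key).decrypt(blob)
    assert recovered == session_key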

Federated Identity Management and SSO

  • Single sign on
    • AKA SSO
  • Security Assertion Markup Language
    • SAML
    • An XML-based language that is commonly used to exchange authentication and authorization (AA) information between federated organizations
  • OAuth 2.0
    • Open standard used for access delegation
  • Scripted access
    • Automated process to transmit logon credentials at the start of a logon session
  • Credential management system
    • Storage space for users to keep their credentials when SSO isn’t available
  • Integrating identity services
    • IDaaS
      • Identity as a service
      • A third party service that provides identity and access management
  • AAA Protocols
    • Identification
    • Authentication
    • Authorization
    • Accountability
  • RADIUS
    • Remote Authentication Dial-In User Service
    • Centralizes authentication for remote connections
  • TACACS+
    • Terminal Access Controller Access-Control System
    • An alternative to RADIUS
    • + includes moving AAA services into separate processes, encrypting authentication information
    • RADIUS encrypts only the password
  • Diameter
    • An enhanced version of RADIUS

Managing the Identity and Access Provisioning Lifecycle

  • Refers to the creation, management, and deletion of accounts
  • Provisioning
    • Creation of new accounts and provisioning them with appropriate privileges
    • Initial creation is called enrollment or registration
    • Should include a background check
    • Many organizations use automated provisioning systems
  • Account review
    • Periodically ensure that security policies are being enforced
    • Check for inactive accounts
    • Excessive privilege
      • When users have more privileges than their assigned work tasks dictate
    • Creeping privileges
      • A user account accumulating privileges over time as job roles and assigned tasks change
    • Excessive and creeping privileges violate the principle of least privilege
  • Account revocation
    • Disable user accounts as soon as possible when employees leave the organization
    • HR personnel should have the ability to perform this task

CISSP Study Notes Chapter 12 - Secure Communications and Network Attacks

Chapter 12 gets into implementing secure communications channels according to design for voice, multimedia, remote access, data communications, and virtualized networks.

Chapter 12: Secure Communications and Network Attacks

My key takeaways and crucial points

Secure Communications Protocols

  • IPSec
    • VPNs
    • Either transport or tunnel mode
  • Kerberos
    • Single sign on solution for users and provides protection for logon credentials
  • SSH
    • End to end encryption
  • Signal Protocol
    • End to end encryption for voice communications, videoconferencing, and text message services
  • Secure Sockets Layer - SSL
    • Session oriented protocol that provides confidentiality and integrity
    • Superseded by TLS

Authentication Protocols

  • Challenge Handshake Authentication Protocol (CHAP)
    • Encrypts usernames and passwords (see the toy sketch after this list)
  • Password Authentication Protocol (PAP)
    • Transmits usernames and passwords in cleartext
  • Extensible Authentication Protocol (EAP)
    • Customized authentication security solutions
  • Protected Extensible Authentication Protocol (PEAP)
    • Encapsulates EAP in a TLS tunnel
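
A toy Python sketch of the challenge-response idea behind CHAP (simplified; real CHAP hashes an identifier, the shared secret, and the challenge together): the password never crosses the wire, only a hash bound to a one-time challenge:

    import hashlib, os

    secret = b"shared-password"                           # known to client and server
    challenge = os.urandom(16)                            # server -> client, fresh each time
    response = hashlib.md5(challenge + secret).digest()   # client -> server
    expected = hashlib.md5(challenge + secret).digest()   # server recomputes independently
    print(response == expected)                           # True -> authenticated; a replayed
                                                          # response fails on a new challenge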

Voice over Internet Protocol (VOIP)

  • VOIP
    • Encapsulates audio into IP packets to support telephone calls over TCP/IP
  • Vishing is VoIP phishing

Social Engineering

  • Social engineering - The means by which an unauthorized person gains access or the trust of someone inside your organization
  • Always err on the side of caution
  • Always request proof of identity
  • Require callback authorization on voice-only requests
  • Classify information properly
  • Have a company security policy that is known by employees
  • Offer secure disposal or destruction for sensitive materials

Fraud and Abuse

  • Phreakers - Attack phone systems like attackers abuse computer networks
  • Restrict dial in and dial out features
  • Define an acceptable use policy
  • Deploy Direct Inward System Access (DISA)
  • Telephone attack tools
    • Black boxes
      • Manipulate line voltages to steal long distance services
    • Red boxes
      • Simulate tones of coins being deposited into a pay phone
      • Just a tape recorder
    • Blue boxes
      • Simulate 2600 Hz tones to interact directly with telephone network trunk systems
    • White boxes
      • Control the phone system with a keypad

Manage Email Security

  • X.400 standard for addressing and message handling
  • Sendmail is most common SMTP for Unix systems
  • Exchange is most common SMTP for Windows systems
  • Avoid turning SMTP server into an open relay
    • SMTP server that does not authenticate senders
  • Email security goals
    • Nonrepudiation
    • Privacy, confidentiality
    • Message integrity
    • Classify sensitive content
  • Email security issues
    • Often plaintext
    • Common delivery mechanism for viruses
    • Spoofing source addresses is simple
    • Denial of service attacks
  • Email security solutions
    • Secure Multipurpose Internet Mail Extensions (S/MIME)
      • X.509 certificates
    • Pretty Good Privacy (PGP)
      • Independently developed
    • Opportunistic TLS
  • Fax security
    • Fax encryption, link encryption, activity logs, exception reports
    • Disable automatic printing

Remote Access Security Management

  • Scraping
    • Automated tool interacts with a human interface
  • Authentication protection matters
    • PAP, CHAP, EAP, PEAP, RADIUS, TACACS+
  • Use callback and caller ID
  • Dial Up Protocols
    • Point to Point Protocol (PPP)
      • Full-duplex
    • Serial Line Internet Protocol (SLIP)
      • Older technology
  • Centralized remote authentication services
    • Remote Authentication Dial In User Service (RADIUS)
      • Centralizes authentication of remote connections
    • Terminal Access Controller Access-Control System (TACACS+)
      • Improves XTACACS by adding two factor authentication
      • Not backwards compatible

Virtual Private Network

  • VPN is a communication tunnel
  • Tunneling
    • Network communications process that protects the contents of protocol packets by encapsulating them in packets of another protocol
    • Can be used if the primary protocol is not routable
  • Can connect two individual systems, or two entire networks
  • Common Protocols
    • PPTP, L2F, L2TP operate on data link layer
    • PPTP, IPSec are limited for use on IP networks
  • Point to Point Tunneling Protocol (PPTP)
    • Initial tunnel negotiation process is not encrypted
  • Layer 2 Forwarding Protocol (L2F) & Layer 2 Tunneling Protocol (L2TP)
    • Cisco proprietary
    • Not encrypted (L2F)
    • L2TP combines elements from PPTP and L2F, supports TACACS+ and RADIUS
  • IP Security Protocol
    • Authentication Header (AH)
      • Provides authentication, integrity, nonrepudiation
    • Encapsulating Security Payload (ESP)
      • Provides encryption to protect the confidentiality of transmitted data, but can also perform limited authentication
      • In tunnel mode, the entire IP packet is encrypted

Virtual LAN

  • Hardware imposed network segmentation, created by switches
  • Used for traffic management
  • Used to isolate traffic between network segments
  • Deny by default, allow by exception
  • Broadcast storm - A flood of unwanted Ethernet broadcast network traffic

Virtualization

  • Indistinguishable from traditional servers and services from a user’s perspective
  • Virtual Software
    • Virtual applications/Containers are software products deployed in a way that it is fooled into believing it is interacting with a full host OS
  • Virtual Networking
    • Software defined networking (SDN)
    • Effectively network virtualization

Network Address Translation

  • Private IP addresses are defined in RFC 1918
  • Most networks employ NAT
  • Private IP Addresses (see the sketch after this list)
    • 10.0.0.0 - 10.255.255.255 (Class A range)
    • 172.16.0.0 - 182.31.255.255 (Class B range)
    • 192.168.0.0 - 192.168.255.255 (Class C range)
  • Stateful NAT
    • Maintains a mapping between requests made by internal clients, a client’s internal IP address, and the IP address of the internet service contacted
    • Maintains information about the communication sessions between clients and external systems
  • Static NAT
    • Specific internal client IP addresses are assigned in a permanent mapping
  • Dynamic NAT
    • Multiple internal clients access to a few leased public IPs
  • Not directly compatible with IPSec, but there are versions that do work
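
Python’s standard ipaddress module can check these ranges directly; a minimal sketch:

    import ipaddress

    for ip in ["10.1.2.3", "172.16.5.9", "192.168.0.1", "8.8.8.8"]:
        print(ip, ipaddress.ip_address(ip).is_private)
    # The three RFC 1918 addresses print True; the public 8.8.8.8 prints False.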

Automatic Private IP Addressing

  • Automatic Private IP Addressing (APIPA)
    • If DHCP fails to assign an address, this Windows feature assigns an address between 169.254.0.1 and 169.254.255.254, with a class B subnet mask
  • You should be able to convert a 32 bit binary number to a single decimal number (see the sketch after this list)
  • Loopback addresses are defined separately from the RFC 1918 private ranges
    • 127.x.x.x
    • Usually only 127.0.0.1 is used
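
A worked example of that conversion, treating each octet of 192.168.0.1 as 8 bits and shifting it into place:

    octets = [192, 168, 0, 1]
    as_int = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    print(as_int)                                      # 3232235521
    print(int("11000000101010000000000000000001", 2))  # same value from the 32-bit binary form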

Circuit Switching

  • Dedicated physical pathway
  • Pathway is permanent throughout a single conversation

Packet Switching

  • When the message or communication is broken up into small segments and sent across intermediary networks to the destination
  • Does not enforce exclusivity of communication pathways
  • Circuit switching vs Packet switching
    Circuit Switching              Packet Switching
    -----------------              ----------------
    Constant traffic               Bursty traffic
    Fixed known delays             Variable delays
    Connection oriented            Connectionless
    Sensitive to connection loss   Sensitive to data loss
    Used primarily for voice       Used for any type of traffic

Virtual Circuits

  • Permanent virtual circuits (PVCs) are like a dedicated leased line
    • Like a two way radio, or walkie talkie
  • Switched Virtual Circuits are like dial up connections because a virtual circuit has to be created using best paths available before it can be used
    • More like shortwave, or ham radio

WAN Technologies

  • Dedicated line is also known as leased line or point to point link
  • Nondedicated line requires a connection to be established before data transmission can occur
  • X.25 WAN Connections
    • Predecessor to Frame Relay
  • Frame Relay
    • Supports multiple PVCs over a single WAN carrier service connection
    • Committed information rate - the guaranteed minimum bandwidth a customer receives
    • Requires a DTE/DCE at each connection point
  • ATM
    • Asynchronous Transfer Mode
    • Fragments communications into fixed length 53 byte cells
    • Very efficient, high throughput
  • Synchronous Digital Hierarchy and Synchronous Optical Network
    • SDH and SONET
    • Fiber optic high speed networking standards

Miscellaneous Security Control Characteristics

  • Transparency
    • The characteristic of a service, control, or access mechanism that ensures that it is unseen by users
  • Verify integrity
    • Uses a checksum, called a hash total
  • Transmission mechanisms
    • Error correction
    • Retransmission

Security Boundaries

  • A line of intersection between any two areas, subnets, or environments that have different security requirements or needs
  • Exist between physical environment and logical environment
  • Ex: A perimeter between a protected area and an unprotected one

Prevent or Mitigate Network Attacks

  • DoS and DDoS
    • Resource consumption attacks
    • Two types
      • Exploiting vulnerability in hardware or software
      • Flood the victim’s communication pipeline
    • Attackers may install bots (aka zombies, agents) on unwitting systems and use them to participate in traffic flooding
    • Collections of bots are called botnets
  • Eavesdropping
    • Listening to communication traffic
  • Impersonation/Masquerading
    • Pretending to be someone or something you are not, in order to gain unauthorized access to a system
    • Different from spoofing, where an entity puts forth a false identity but without any proof
  • Replay attacks
    • Replaying captured traffic against a system
  • Modification attacks
    • Captured packets are altered
  • Address Resolution Protocol Spoofing
    • Provides false MAC addresses for requested IP address
    • Facilitates man in the middle attacks
  • DNS poisoning, spoofing, hijacking
    • Poisoning and spoofing are known as resolution attacks - when an attacker alters the domain name to IP address mappings to redirect traffic to a rogue system
    • Homograph attacks - Register phony international domain names, letters in Cyrillic
  • Hyperlink spoofing
    • Used to redirect traffic to a rogue system, or simply to divert traffic away from its intended destination
    • Alteration of hyperlink URLs in HTML code
  • Related to phishing is pretexting, where one obtains personal information under false pretenses

CISSP Study Notes Chapter 11 - Secure Network Architecture and Securing Network Components

Chapter 11 goes over a lot of networking topics including the OSI and TCP/IP models, IP networking, multilayer protocols, converged protocols, software-defined networks, wireless networks, and a whole bunch of hardware items.

Chapter 11: Secure Network Architecture and Securing Network Components

My key takeaways and crucial points

OSI Model

  • Know the different levels of the OSI model in order
    • Application (7)
    • Presentation
    • Session
    • Transport
    • Network
    • Data Link
    • Physical (1)
    • Come up with a mnemonic device to remember them if you have to - All People Seem To Need Data Processing, for instance
  • Encapsulation/de-encapsulation - Each layer adds a header (and sometimes a footer) to the data it receives from the layer above before handing it to the layer below; on the receiving end, those headers and footers are removed as the data flows back up the OSI model (see the sketch after this list)
  • Pieces of data have different names at different points in the OSI model
    • Protocol data unit - PDU. Application, presentation, session layer data units.
    • Segment or datagram - Transport layer data units.
    • Packet - Network layer.
    • Frame - Data link layer.
    • Bits - Physical layer.
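
A toy Python sketch of encapsulation (the header names are made up for illustration): each layer wraps the unit it receives, and the receiving side strips the wrappers in reverse order:

    payload = b"hello"                    # application data
    segment = b"TCP|" + payload           # transport layer adds its header
    packet = b"IP|" + segment             # network layer adds its header
    frame = b"ETH|" + packet + b"|FCS"    # data link layer adds a header and a footer
    print(frame)                          # de-encapsulation peels these off in reverse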

Physical Layer

  • Converts the frame into bits for transmission over the physical connection medium
  • Formatting the packet from the network layer into the proper format for transmission

Data Link Layer

  • Ethernet - 802.3
  • Asynchronous Transfer Mode - ATM
  • Fiber Distributed Data Interface - FDDI
  • Address Resolution Protocol - ARP - Used to resolve IP addresses into MAC addresses
  • Layer 2 Tunneling Protocol - L2TP
  • Media Access Control address - MAC address, 48 bit binary address that identifies a device
    • First 3 bytes of the address denotes the vendor or manufacturer of the physical network interface
  • Data link layer has two sub-layers
    • Logical Link Control - LLC
    • MAC
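
Here's a minimal sketch of pulling the vendor prefix out of a MAC address - the address is invented, and mapping a prefix to an actual vendor name would require the IEEE OUI registry, which isn't shown:

```python
mac = "00:1A:2B:3C:4D:5E"   # hypothetical MAC address
octets = mac.split(":")

oui = ":".join(octets[:3])       # first 3 bytes: vendor/manufacturer prefix (OUI)
device = ":".join(octets[3:])    # last 3 bytes: assigned by the manufacturer

print("Vendor prefix (OUI):", oui)     # 00:1A:2B
print("Device identifier:", device)    # 3C:4D:5E
```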

Network Layer

  • Adding routing and addressing information to data
  • Internet Group Management Protocol (IGMP) - Multicast protocol
  • The three most recognized non-IP protocols are IPX, AppleTalk, and NetBEUI
  • Routers and brouters (bridge routers) function at layer 3
  • Routing protocols
    • Distance vector - keep a list of destination networks along with metrics of direction and distance measured in hops
    • Link state - keep a topography map of all connected networks and use this map to determine the shortest path to a destination network

Transport Layer

  • Manages the integrity of a connection and controls the session
  • Transmission Control Protocol - TCP
  • User Datagram Protocol - UDP
  • Secure Sockets Layer - SSL
  • Transport Layer Security - TLS

Session Layer

  • Establishes, maintains, and terminates communication sessions between two computers
  • Network File System - NFS
  • Structured Query language - SQL
  • Remote Procedure Call - RPC
  • Simplex - One way communication
  • Half-Duplex - Two way communication, one direction at a time
  • Full-Duplex - Two way communication, two directions at the same time

Presentation Layer

  • Transforms data from the application layer into a format that any system using the OSI model can understand
  • Images, video, sound, ASCII, JPEG, etc.

Application Layer

  • Interfacing user applications, network services, or the OS with the protocol stack
  • The application is not located at this layer

TCP/IP Model

  • AKA the DARPA or DOD model
  • Only has four layers
    • Application - aka Process, maps to OSI Application, Presentation, Session layers
    • Transport - aka Host-To-Host, maps to OSI Transport layer
    • Internet - aka Internetworking, maps to OSI Network layer
    • Link - aka Network Access, maps to OSI Data Link, Physical layers
  • Can be secured using VPN - virtual private network
    • L2TP, SSH, SSL/TLS VPNs, IPSec
    • TCP wrappers - Applications that can serve as a basic firewall by restricting access to ports and resources based on user IDs or system IDs

Transport Layer Protocols

  • Ports 0 through 65535
    • Well known ports: 0 - 1023
    • Registered ports: 1024 - 49151
    • Random, dynamic, ephemeral ports: 49152 - 65535
  • Transmission Control Protocol - TCP
    • Connection oriented
    • Reliable sessions
    • Handshake process (see the socket sketch after this list)
      • Client sends a SYN packet to server
      • Server responds with SYN/ACK to client
      • Client responds with ACK to server
    • Uses FIN packets to terminate connections
    • Uses ACK packets to confirm that data has been received
    • Uses RST packets to forcibly close connections
    • Uses a graceful 4 packet teardown
      • Each side sends a FIN
      • Each side ACKs the FIN sent by the other
    • IP protocol header field value for TCP is 6
  • User Datagram Protocol - UDP
    • Connectionless, best effort
    • Considered unreliable
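
Here's a minimal socket sketch of the handshake and teardown - the kernel performs the SYN/SYN-ACK/ACK and FIN/ACK exchanges for you, and port 50007 is an arbitrary pick from the dynamic range:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 50007))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 50007))   # kernel: SYN -> SYN/ACK -> ACK

conn, _ = server.accept()
client.sendall(b"hello")               # receiver ACKs the data
print(conn.recv(1024))                 # b'hello'

client.close()                         # graceful teardown: FINs and ACKs
conn.close()
server.close()
```

A UDP socket (SOCK_DGRAM) skips all of this, which is exactly why it's connectionless and best effort.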

Network Layer Protocols and IP Networking Basics

  • IP provides route addressing for data packets, provides a means of identity and prescribes transmission paths
  • IPv4 vs IPv6 - v4 uses 32 bit addresses, v6 uses 128 bits
  • IP classes
Class   First binary digits   Decimal range of first octet   Default subnet mask   CIDR equivalent
A       0                     1-126                          255.0.0.0             /8
B       10                    128-191                        255.255.0.0           /16
C       110                   192-223                        255.255.255.0         /24
D       1110                  224-239                        -                     -
E       1111                  240-255                        -                     -
  • Class A network starting with 127 is set aside for loopback address
  • Classless Inter-Domain Routing - CIDR, represents the subnet mask as a slash and the number of mask bits instead of the full dotted-decimal mask notation (see the sketch after this list)
  • Internet Control Message Protocol - ICMP
    • Used to determine the health of a network or link
    • Denial of service - DoS, a type of attack sometimes associated with ICMP - specifically ping of death, smurf attacks, and ping floods
    • ICMP type field values
      • 0 - Echo reply
      • 3 - Destination unreachable
      • 11 - Time exceeded
  • Internet Group Management Protocol - IGMP, allows systems to support multicasting
  • Address Resolution Protocol - ARP, used to resolve IP addresses into MAC addresses
    • Uses caching and broadcasting
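
The standard library's ipaddress module makes the classful-default-to-CIDR mapping easy to verify - the networks below are just illustrative:

```python
import ipaddress

for cidr in ("10.0.0.0/8", "172.16.0.0/16", "192.168.1.0/24"):
    net = ipaddress.ip_network(cidr)
    print(cidr, "-> mask", net.netmask, "-", net.num_addresses, "addresses")
# 10.0.0.0/8 -> mask 255.0.0.0 - 16777216 addresses     (Class A default)
# 172.16.0.0/16 -> mask 255.255.0.0 - 65536 addresses   (Class B default)
# 192.168.1.0/24 -> mask 255.255.255.0 - 256 addresses  (Class C default)

print(ipaddress.ip_address("127.0.0.1").is_loopback)  # True - the reserved 127 block
```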

Common Application Layer Protocols

  • Telnet - TCP port 23, terminal emulation
  • File Transfer Protocol - FTP, TCP ports 20 (data connection) and 21 (control connection), for exchanging files
  • Trivial File Transfer Protocol - TFTP, UDP port 69
  • Simple Mail Transfer Protocol - SMTP TCP port 25, for transmitting email messages
    • POP3, TCP port 110
    • IMAP, TCP port 143
  • Dynamic Host Configuration Protocol - DHCP UDP port 67 & 68, used to assign IP addresses and configuration settings to systems on bootup
  • Hypertext Transfer Protocol - HTTP TCP port 80, used to transmit web page elements
  • Secure Sockets Layer - SSL TCP port 443, adds a security protocol at the Transport layer to HTTP (making it HTTPS)
  • Line Print Daemon - LPD TCP port 515, spool print jobs
  • X Window - TCP port 6000-6063, GUI API for CLI OS
  • Network File System - NFS TCP port 2049
  • Simple Network Management Protocol - SNMP UDP port 161, port 162 for trap messages, network service to collect network health and status information
  • DNS port 53
  • Kerberos port 88
  • L2TP port 1701
  • PPTP port 1723
  • RDP port 3389
  • TCP/IP is considered a multilayer protocol because it is made up of many different protocols spread across multiple layers of the stack

DNP3

  • Distributed Network Protocol 3 (DNP3) is primarily used in the electric and water utility management industries
  • Used with SCADA

TCP/IP Vulnerabilities

  • SYN flood attacks
  • Spoofing
  • Man in the middle
  • Hijack
  • Packet Sniffing - Capturing packets from the network in hopes of extracting useful information from the contents of the packet

Domain Name System (DNS)

  • Top level domain - TLD, the .ca in www.thomasrayner.ca
  • Registered domain name - The thomasrayner in www.thomasrayner.ca
  • Subdomain or hostname - The www in www.thomasrayner.ca (see the lookup sketch after this list)
  • Primary authoritative name server - Hosts the original zone file for the domain
  • Secondary authoritative name server - Used to host read-only copies of the zone file
  • Zone file - Collection of resource records or details about the specific domain
  • DNSSEC provides reliable authentication between devices during DNS operations
  • DNS Poisoning - Falsifying the DNS information used by a client to reach a desired system
    • Involves attacking the real DNS server and placing incorrect information into its zone file
  • Rogue DNS server - AKA DNS Spoofing, Pharming
  • Pharming - malicious redirection of a valid website’s URL or IP address to a fake website that hosts a false version of the original valid site
  • Domain hijacking - Changing the registration of a domain name without the authorization of the valid owner
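
Here's a quick sketch of splitting a name into those parts and doing an ordinary forward lookup with the standard library - the final line needs network access and a working resolver:

```python
import socket

name = "www.thomasrayner.ca"
subdomain, registered, tld = name.split(".")
print("subdomain:", subdomain, "| registered domain:", registered, "| TLD:", tld)

# The stub resolver asks the configured DNS server, which ultimately gets
# the answer from the domain's authoritative name servers.
print(socket.gethostbyname(name))
```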

Converged Protocols

  • Converged Protocols - Merging of specialty protocols with standard protocols
  • Fibre Channel over Ethernet - FCoE, for Storage Area Networks or Network Attached Storage
  • Multiprotocol Label Switching - MPLS, Directs data across a network based on short path labels rather than longer network addresses
  • Internet Small Computer System Interface - iSCSI, Enables location independent file storage, transmission and retrieval over networks, low cost alternative to Fibre Channel
  • Voice over IP - VoIP, Transports voice and/or data over a TCP/IP network
  • Software Defined Networking - SDN, the complexities of a traditional network often force an organization to stick with a single device vendor, so SDN offers a new network design that is programmable from a central location

Content Distribution Networks

  • Content Distribution Networks - CDNs, a collection of resource services deployed in numerous data centers across the internet in order to provide low latency, high performance, and high availability of the hosted content

Wireless Networks

  • Data emanation - Transmission of data across electromagnetic signals
  • Emanations occur whenever electrons move

Securing Wireless Access Points

  • Wireless cells - Areas within a physical environment where a wireless device can connect to an access point
  • 802.11 is the IEEE standard for wireless network communications
    • 802.11i - Security standard
  • Ad hoc mode - Any two wireless devices can communicate without centralized control authority
    • Infrastructure mode requires a wireless access point
  • Stand alone mode - When there is a wireless access point connecting clients to each other but not to any wired resources
  • Wired extension mode - When access points act as a connection point to wired networks
  • Service Set Identifier - SSID, the name of a wireless network when a WAP is used
  • Channels - Subdivisions of wireless frequencies

Securing the SSID

  • SSID is broadcast by the WAP using beacon frames
  • Hiding the SSID is not true security, because it is easily discoverable
  • Site survey - The process of investigating the presence, strength, and reach of WAPs deployed

WEP

  • Wired Equivalent Privacy - WEP
  • Uses a predefined shared secret key
  • Key is static and shared among all WAPs and devices
  • Was cracked almost as soon as it was released
  • Uses Rivest Cipher 4 (RC4)
  • Weaknesses: static common key, and poor implementation of IVs (initialization vectors)

WPA

  • WiFi Protected Access - WPA
  • RSN - Robust Secure Network
  • Based on LEAP and Temporal Key Integrity Protocol (TKIP)
  • Use of a single static passphrase is the downfall of WPA
  • LEAP and TKIP encryption options are now crackable

WPA2

  • Full 802.11i implementation
  • Based on AES encryption

802.1X/EAP

  • Standard port-based network access control, ensures that clients cannot communicate until proper authentication has taken place
  • Uses RADIUS or TACACS+, certs, smart cards, etc.
  • Extensible Authentication Protocol - EAP, not a specific mechanism of authentication

PEAP

  • Protected Extensible Authentication Protocol
  • EAP methods within a TLS tunnel

LEAP

  • Lightweight EAP
  • Cisco proprietary
  • Should be avoided when possible

MAC Filter

  • A list of authorized wireless client interface MAC addresses
  • Blocks access to nonauthorized devices

TKIP

  • Temporal Key Integrity Protocol
  • Improvements include a key-mixing function that combines the initialization vector with the secret root key before using RC4 to perform encryption
  • Prevents replay attacks

Antenna Types

  • Omnidirectional - Can send and receive signals in all directions
  • Directional - Can send and receive in only one direction

WPS

  • WiFi Protected Setup
  • Simplifies the effort involved with adding new clients to a well secured wireless network
  • Generally recommended to leave this turned off

Captive Portals

  • An authentication technique that redirects a newly connected wireless client to a portal access control page

General WiFi Security Procedure

  • Treat wireless as remote access
  • Treat wireless as external access

Wireless Attacks

  • War driving - Searching for wireless networks one isn't authorized to access
  • War chalking - Physically marking an area with information about the presence of a wireless network
  • Replay - Retransmission of captured communications
  • IV - Initialization vector, a mathematical and crypto term for a random number, becomes a point of weakness when it’s too short, exchanged in plaintext, or selected improperly
  • Rogue access points - May be planted by an employee for convenience, or by an attacker
  • Evil twin - When a hacker operates a false access point that will automatically clone an access point based on a client’s request to connect
    • Eavesdrops on the wireless signal for reconnect requests and spoofs its identity, offering a plaintext connection to the client

Secure Network Components

  • Intranet - Private network, internal
  • Extranet - Cross between internet and intranet
  • Demilitarized zone - DMZ, extranet for public consumption
  • Network access control - Controlling access to an environment
    • Prevent/reduce zero day attacks
    • Enforce security policy
    • Use identities to perform access control
  • Firewalls filter traffic
    • Most effective against unrequested traffic and attempts to connect from outside the private network
    • Typically unable to block viruses or malicious code
    • Static packet filtering firewalls filter traffic by examining message headers
    • Application gateway level firewalls are also called proxies, and are mechanisms that copy packets from one network to another
    • Circuit level gateway firewalls establish communication sessions between trusted partners
    • Stateful inspection firewalls, aka dynamic packet filtering, evaluate the state or the context of network traffic
    • Deep packet inspection firewalls filter the payload contents of a communication rather than only basing filtering on header values
    • Next gen firewalls incorporate intrusion prevention systems, proxies, quality of service management, and more
    • Multihomed firewalls have at least two interfaces to filter traffic between two networks

Endpoint Security

  • Each individual device must maintain local security
  • Collision domain - A group of networked systems in which a collision occurs if two or more systems transmit simultaneously
  • Broadcast domain - A group of networked systems in which every member receives a broadcast signal sent by any one member
  • Repeaters, concentrators, and amplifiers strengthen communication signals
  • Hubs are multiport repeaters
  • Modems are used for accessing the PSTN (public switched telephone network)
  • Bridges connect two networks together
  • Switches are intelligent hubs that know the addresses of systems connected to each outbound port
    • Can implement VLANs
  • Routers control traffic flow on networks
  • Brouters are combinations of routers and bridges, and connect systems using the same protocols
  • Gateways connect networks that are using different network protocols
  • Proxies are a form of gateway that does not translate across protocols

Cabling, Wireless, Topology, Communications, and Transmission Media Technology

  • Coaxial cable
    • 10Base2 aka thinnet spans distances up to 185 meters and provides up to 10 Mbps
    • 10Base5 aka thicknet spans up to 500 meters and provides 10 Mbps
  • Broadband and baseband
    • Baseband cables can only transmit a single signal at a time
    • Broadband cables can transmit multiple signals simultaneously
  • Twisted pair
    • STP (shielded)
    • UTP (unshielded)
    • Crosstalk occurs when data transmitted over one set of wires is picked up by another set of wires due to radiating electromagnetic fields produced by the electrical current
    • Cat 5 - 100 Mbps
    • Cat 6 - 1000 Mbps
    • Cat 5e is enhanced Cat 5 designed to protect against crosstalk
  • Plenum cable is sheathed with special material that does not release toxic fumes when burned, must be used to comply with building codes

Network Topologies

  • Ring - Each system is a point on a circle
  • Bus - Trunk or backbone, linear or a tree
  • Star - Centralized connection device
  • Mesh - Using numerous paths to connect each system to all other systems

General Wireless Concepts

  • Spread spectrum means communication occurs over multiple frequencies
  • Orthogonal frequency division multiplexing
    • Modulated signals are perpendicular and thus do not cause interference with each other
    • Ultimately OFDM requires a smaller frequency set but offers greater data throughput
  • Bluetooth (802.15)
    • Personal area network (PAN)
    • 2.4 GHz frequencies
    • Bluejacking - Allows an attacker to transmit short messages to your device
    • Bluesnarfing - Allows hackers to connect with your devices without your knowledge
    • Bluebugging - Remote control over the feature and functions of a Bluetooth device
    • Typically a range of 30 - 100 feet
  • RFID
    • Radio frequency identification
    • Current generated in an antenna when placed in a magnetic field
  • NFC
    • Near field communication
    • Derivative of RFID
  • Mobile devices
    • Keep nonessential information off portable devices
    • Keep systems locked and encrypted when possible

LAN Technologies

  • Ethernet - Shared media LAN technology
    • IEEE 802.3
    • Individual units of Ethernet data are called Frames
  • Token Ring - Token passing mechanism to control which systems can transmit data over the network medium
  • Fiber distributed data interface - FDDI, high speed token passing technology. Copper Distributed Data Interface uses twisted pair cables.
  • Digital signals are more reliable than analog signals over long distances or when interference is present
  • Synchronous communications - Rely on timing or clocking mechanisms
  • Asynchronous communications - Rely on a stop and start delimiter bit
  • Broadcast - Communication to all recipients
  • Multicast - Communication to multiple specific recipients
  • Unicast - Communication to one specific recipient
  • Carrier Sense Multiple Access - CSMA
    • A LAN media access technology with collision avoidance (CA) and collision detection (CD) modes
    • CA mode is used on 802.11 wireless networks and AppleTalk, attempts to avoid collisions by granting only a single permission to communicate at a time
    • CD mode is used by Ethernet networks and responds to collisions by having each member of a collision domain wait for a short but random period of time before starting over

CISSP Study Notes Chapter 10 - Physical Security Requirements

Chapter 10 covers implementing site and facility security controls, designing sites and facilities, and generally protecting things from physical threats.

Chapter 10: Physical Security Requirements

My key takeaways and crucial points

Apply Security Principles to Site and Facility Design

  • Secure facility plan - Outlines the security needs of your organization and emphasizes methods or mechanisms to employ to provide security
  • Critical path analysis - Identifying relationships between mission-critical applications, processes, and operations
  • Technology convergence - The tendency for various technologies, solutions, utilities, and systems to evolve and merge over time

Site Selection

  • Visibility - Where are the closest emergency services located? Are there unique hazards? Locations of security cameras.
  • Natural disasters
  • Facilities design - Crime Prevention through Environmental Design (CPTED)

Implement Site and Facility Security Controls

  • Administrative physical security controls - Include facility construction and selection, personnel controls, awareness training, emergency response
  • Technical physical security controls - Include access controls, intrusion detection, alarms, monitoring, HVAC, power supplies, fire detection and suppression
  • Physical controls for physical security - Include fences, lighting, locks, mantraps, dogs, guards
  • Functional order controls should be used:
    1. Deterrence - Boundary restrictions
    2. Denial - Locked vault doors
    3. Detection - Using motion detectors
    4. Delay - Cable lock on a laptop

Equipment failure

  • Service level agreements (SLA) - Defines vendor response time
  • Mean time to failure (MTTF) - Expected typical functional lifetime of the device
  • Mean time to repair (MTTR) - The average length of time required to perform a repair
  • Mean time between failures (MTBF) - Estimation of the time between the first and any subsequent failures
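
These metrics are often rolled into an availability estimate. The formula below is a common reliability approximation rather than something this section of the book spells out, and the numbers are invented:

```python
mttf_hours = 10_000   # expected functional lifetime before a failure
mttr_hours = 8        # average repair time

# Steady-state availability ~ MTTF / (MTTF + MTTR)
availability = mttf_hours / (mttf_hours + mttr_hours)
print(f"Estimated availability: {availability:.4%}")  # ~99.92%
```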

Wiring Closets

  • An element of cable plant management policy
  • Entrance facility - aka demarcation point, where the cable enters the building
  • Equipment room - Main wiring closet
  • Backbone distribution system - Between equipment room and telecommunication rooms
  • Telecommunications room - Serves the connection needs of a floor or a section of a large building
  • Horizontal distribution system - Between the telecommunication room and work areas

Server Room

  • The more human incompatible a server room is, the more protection it offers against casual and determined attacks
  • Walls should have one-hour minimum fire rating
  • Datacenter could be a single tenant or multitenant
  • Smartcards - Credit-card-sized IDs, badges, or security passes with an embedded magnetic strip, bar code, or integrated circuit chip
  • Memory cards - Machine-readable ID cards with a magnetic strip
  • Proximity reader - Passive device, a field-powered device, or a transponder
  • Passive device - Like antitheft devices found in DVDs
  • Intrusion detection system - Systems designed to detect a breach or attack
  • Masquerading - Using someone else’s security ID to gain entry into facilities
  • Piggybacking - Following someone through a secured gate or doorway without being identified or authorized personally
  • Emanation - Electromagnetic signals or radiation that can be intercepted by unauthorized individuals
  • Faraday cage - An area designed with an external metal skin that surrounds the area on all sides and blocks electromagnetic interference (EMI)
  • White noise - Random sounds, signal, or process that can drown out meaningful information
  • Control zone - An implementation of a Faraday cage, a white noise generator, or both to protect a specific area

Media storage facilities

  • Data remnants - The remaining data elements left on a storage device after a standard deletion or formatting
  • Have a librarian or custodian
  • Implement drive sanitization or zeroization
  • Verify data integrity with hash-based integrity checks
  • Limit storage access, especially to evidence

Restricted and work area security

  • Shoulder surfing - Gathering information from a system by observing the monitor or the use of the keyboard by the operator
  • Sensitive Compartmented Information Facility (SCIF)

Utilities and HVAC Considerations

  • Uninterruptible power supply - UPS. A type of self-charging battery that can be used to supply consistent clean power
  • Line interactive UPS have surge protectors, voltage regulators
  • Fault - Momentary loss of power
  • Blackout - A complete loss of power
  • Sag - Momentary low voltage
  • Brownout - Prolonged low voltage
  • Spike - Momentary high voltage
  • Surge - Prolonged high voltage
  • Inrush - Initial surge of power associated with connecting to a power source
  • Noise - A steady interfering power disturbance or fluctuation
    • EMI - electromagnetic interference has two types
      • Common mode noise is generated by a difference in power between the hot and ground wires
      • Traverse mode noise is generated by a difference in power between the hot and neutral wires
    • Radio frequency interference - RFI, electrical appliances generate RFI like fluorescent lights
  • Transient - A short duration of line noise
  • Clean - Nonfluctuating pure power
  • Ground - The wire in an electrical circuit that is grounded

Temperature, Humidity, and Static

  • Humidity in a computer room should be maintained between 40-60%
  • A static discharge of 1,500 volts can destroy data stored on hard drives

Water Issues (e.g., Leakage, Flooding)

  • Water and electricity don’t mix

This is seriously the only thing I have highlighted in this three paragraph section of the book

Fire Prevention, Detection, and Suppression

  • Protecting personnel from harm should always be the most important goal of any security or protection system
  • Fire extinguisher classes
Class   Type                                   Suppression material
A       Common combustibles                    Water, soda acid
B       Liquids                                CO2, halon, soda acid
C       Electrical                             CO2, halon
D       Metal                                  Dry powder
K       Kitchen (grease fires, like Class B)   CO2, halon
  • Fixed temperature detection - Triggers suppression when a specific temperature is reached
  • Rate of rise detection - Triggers detection if a temperature increases at a specific speed
  • Flame actuated system - Triggers suppression based on infrared energy of flames
  • Smoke actuated system - Uses photoelectric or radioactive ionization sensors
  • Water suppression systems
    • Wet pipe - Always full of water
    • Dry pipe - Air escapes opening a water valve
    • Deluge - A form of dry pipe that uses larger pipes
    • Preaction - Combination of dry and wet pipe, system is dry until initial stages of fire when pipes are filled with water. Water is only released after sprinkler head activation triggers are melted by heat.
      • Most appropriate water-based system for environments with both humans and computers
  • Water system failures are most often caused by human error
  • Gas discharge systems
    • More effective than water discharge, but shouldn’t be used where people are located
    • They deploy CO2, FM-200 (a halon replacement), or halon (no longer used since it was banned)

Implement and Manage Physical Security

  • Fences 3-4 feet high deter casual trespassers
  • Fences 6-7 feet high discourage most intruders except determined ones
  • Fences 8 feet or more high with three strands of barbed wire deter even determined intruders
  • Gates - Controlled exit and entry points in a fence
  • Turnstile - Restricts movement in one direction
  • Mantrap - A double set of doors that is often protected by a guard
  • Perimeter protection needs lights with 2 foot-candles of power
  • If a lighted area is 40 feet in diameter, poles should be 40 feet apart
    • 5 feet apart in parkades
  • You need to be able to handle visitors
  • Lock - A crude form of identification and authorization mechanism
  • Electronic access control lock - Has an electromagnet to keep the door locked, a credential reader for authentication, and a sensor to reengage the electromagnet when the door is closed
  • Local alarm systems - Must broadcast an audible alarm heard up to 400 feet away (up to 120 decibels)
  • Central station system - Usually silent locally, but offsite monitoring agents are notified
  • Auxiliary station - Emergency services are notified
  • Closed circuit TV is not an automated detection and response system, it needs people watching it
  • Human safety is always the most important factor
  • Privacy means protecting personal information from disclosure - NIST 800-122, GDPR
    • You are usually obliged to have physical security controls

CISSP Study Notes Chapter 9 - Security Vulnerabilities, Threats, and Countermeasures

Chapter 9 gets into assessing and mitigating the vulnerabilities of security architectures, designs, and solution elements. It also talks about assessing and mitigating vulnerabilities in web-based systems, mobile systems, and embedded devices.

Chapter 9: Security Vulnerabilities, Threats, and Countermeasures

My key takeaways and crucial points

Processor

  • Computer architecture - Engineering discipline related to design and construction of computer systems at a logical level
  • Processor - Central Processing Unit (CPU). The chip that governs all major operations and either performs or coordinates calculations that allow a computer to perform its tasks
    • Execution types:
      • Multitasking - Handling two or more tasks simultaneously
      • Multicore - A single microprocessor chip that contains independent execution cores
      • Multiprocessing - A computing system with more than one CPU uses the power of more than one processor to complete a multithreaded operation
      • Massively parallel processing - MPP. Hundreds or more processors with their own OS and resources, coordinated by software to perform in unity
      • Multiprogramming - Similar to multitasking
      • Multithreading - Multiple concurrent tasks are performed within a single process - multitasking involves multiple processes
    • Processing types:
      • Single state - Handle only one security level at a time, relying on policy mechanisms to manage information at different levels
      • Multistate - Certified to handle multiple security levels simultaneously by using specialized security mechanisms
        • Protection rings - Organize code and components in an OS into concentric rings.
          • The innermost ring is 0, which has the highest level of privilege and can access anything.
          • Kernel - Part of the OS that remains resident in memory so it can be run on demand at any time. Lives in ring 0.
          • Ring 1 houses the rest of the OS
          • Ring 2 is somewhat privileged, I/O drivers and other system utilities reside
          • Applications and programs live in Ring 3
        • Process states - Aka operating states
          • Supervisor state - privileged, all access
          • Problem state - Associated with user mode where privileges are low and access requests must be checked
          • Ready - Process is ready to resume or start processing as soon as it is scheduled
          • Waiting - Process is waiting for a resource, waiting for continued execution but a device or access request is to be serviced first
          • Running - Executing on the CPU and goes until it finishes, its time slice expires, or it is blocked
          • Supervisory - When the process must perform an action that requires privileges that are greater than the problem state’s set of privileges
          • Stopped - When a process finishes or errors out, it becomes “stopped” so the OS can recover memory and other resources
  • Security modes
    • See chapter 1 for data classification
    • Dedicated mode - Single state system where each user must have a security clearance that permits access to all information on the system, and need to know for all information
    • System high mode - Each user needs a valid security clearance that covers all information on the system, but only need to know for some information
    • Compartmented mode - Same as above, but they only need approval to access some information, not all
    • Multilevel mode - Same as above, except users don’t need clearance for all information, just for the information they access, pursuant to its classification
  • Operating modes
    • User mode - The basic mode a CPU uses when executing user applications
    • Privileged mode - Gives the OS access to the full range of instructions supported by the CPU

Memory

  • Read-Only memory - ROM. Memory the PC can read but cannot change
  • Programmable Read-Only memory - PROM. Similar to a ROM chip but can be programmed separately from manufacturing, but only once
  • Erasable Programmable Read-Only memory - EPROM. Ultraviolet EPROMs can be erased with a light
  • Electronically Erasable Programmable Read-Only memory - EEPROM. Electronically erasable PROM
  • Flash memory - Derivative from EEPROM, can be electronically erased and rewritten, can be erased in blocks or pages
  • Random Access Memory - RAM. Readable and writable memory that contains information a computer uses during processing
    • Real memory - The largest RAM storage resource available, made of dynamic RAM chips
    • Cache RAM - Caches that improve performance by taking data from slower devices and storing it temporarily in faster devices when repeated use is likely
  • Registers - Limited onboard memory in a CPU
  • Memory addressing
    • Register addressing - An identifier for a register
    • Immediate addressing - A way of referring to data that is supplied to the CPU as part of an instruction
    • Direct addressing - CPU is provided with an actual address of the memory location to access
    • Indirect addressing - Similar to direct, however the address supplied doesn’t contain the actual value, it contains another memory address
    • Base+Offset addressing - Uses a value stored in one register as the base from which to begin counting, and then adds the offset supplied to retrieve a value from that computed memory location (see the sketch after this list)
  • Secondary memory - Usually magnetic, optical, flash-based, or other storage that contains data not immediately available for the CPU - Hard drives, DVDs, etc.
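
Here's a toy simulation of those addressing schemes, using a list as "memory" and a dict as "registers" - values and addresses are invented:

```python
memory = [0] * 16
memory[5] = 42        # a value stored at address 5
memory[9] = 5         # address 9 holds another address (for indirect addressing)
registers = {"R1": 42, "BASE": 4}

v_register = registers["R1"]                     # register addressing
v_immediate = 42                                 # immediate: operand is part of the instruction
v_direct = memory[5]                             # direct: instruction supplies the real address
v_indirect = memory[memory[9]]                   # indirect: supplied address points at an address
v_base_offset = memory[registers["BASE"] + 1]    # base+offset: BASE register (4) + offset (1) -> 5

print(v_register, v_immediate, v_direct, v_indirect, v_base_offset)  # 42 42 42 42 42
```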

Storage

  • Data storage devices - Used to store information that may be used any time after it’s written
  • Primary vs secondary - Primary storage is RAM. Secondary is long-term storage like hard drives.
  • Volatile vs non-volatile - Volatile devices lose their data when they lose power
  • Random vs sequential - Random access storage allows an OS to read immediately from any point. Sequential storage doesn't.
  • Storage media security
    • Data remanence - Data may remain on devices after it is erased
    • SSDs can only be sanitized by destroying them
    • Prone to theft, important to use full disk encryption

Input and Output Devices

  • Monitors - Primary concern is shoulder surfing - looking over the shoulder or at the screen to observe information displayed
  • Printers - Users may leave sensitive printouts, some printers store data locally and can re-print data
  • Keyboards/mice - Key loggers may intercept and record key strokes and transmit them, revealing passwords and other sensitive information
  • Firmware - Software stored on a ROM chip
    • BIOS - Basic input/output system. Contains OS independent primitive instructions that a computer needs to start up and load the OS from disk.
    • Other devices also rely on firmware to meet their limited processing needs

Client-Based Systems

  • Client-side attack - Any attack that is able to harm a client, may occur over any protocol
  • Applets - Code objects that are sent from a server to a client to perform some action - mini programs
  • Local cache - Anything that is temporarily stored on the client for future reuse - see ARP caches, the chapter goes into a ridiculous amount of detail on ARP attacks and DNS cache poisoning

Server-Based Systems

  • Data flow control - The movement of data between processes, between devices, across a network, or over communication channels
    • Making sure endpoints are not overwhelmed with traffic - maybe by using a load balancer
  • Denial of service attack - Attacking the availability of a resource or system, often by attacking data flow control

Database Systems Security

  • Aggregation - Combining records from one or more tables to produce potentially useful information
    • An attacker might be able to take multiple pieces of seemingly innocuous information and combine them to infer something more dangerous (see the sketch after this list)
  • Inference - Combining several pieces of non-sensitive information to gain access to information that should be classified at a higher level
  • Data mining & data warehousing - Warehouses are large databases, data dictionaries are used for storing critical information about data, mining allows analysts to comb through warehouses and look for correlated information
  • Data analytics - The science of raw data examination with the focus of extracting useful information out of the bulk information set
    • Big data - Collections of data that have become so large that traditional means of processing aren’t enough
  • Large-Scale parallel data systems - Systems designed to perform numerous calculations simultaneously
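
Here's a toy sqlite3 illustration of aggregation and inference - the schema and data are invented, and each row alone is innocuous, but joining the tables supports a sensitive inference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (unit TEXT, destination TEXT)")
conn.execute("CREATE TABLE supplies (unit TEXT, item TEXT)")
conn.execute("INSERT INTO flights VALUES ('5th Battalion', 'Region X')")
conn.execute("INSERT INTO supplies VALUES ('5th Battalion', 'arctic gear')")

# Aggregating two low-sensitivity tables lets an analyst (or attacker)
# infer where the unit is going and what conditions it expects there.
rows = conn.execute(
    "SELECT f.unit, f.destination, s.item "
    "FROM flights f JOIN supplies s ON f.unit = s.unit"
).fetchall()
print(rows)  # [('5th Battalion', 'Region X', 'arctic gear')]
```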

Distributed Systems and Endpoint Security

  • Distributed architecture - The concept of a client-server model network
    • Email needs to be screened
    • Download/upload policies must be created
    • Systems must be subject to robust access controls
    • Restrict user-interface mechanisms and database management systems
    • File encryption may be appropriate, but full disk encryption is also needed
    • Separate and isolate processes that run in user and supervisory mode
    • Protection domains should be created
    • Sensitive materials must be labeled
    • Files on user machines should be backed up
    • Users need regular security awareness training
    • Computers and their storage media require protection from environmental hazards
    • User computers should be included in disaster recovery and business continuity planning
    • Developers of custom software need to take security into account
  • Cloud computing - Referring to computing where processing and storage are performed elsewhere over a network connection rather than locally - Depends on virtualization
    • Type 1 hypervisor - Native or bare-metal hypervisor, there is no host OS, the hypervisor installs directly on the hardware
    • Type 2 hypervisor - Hypervisor software is installed on top of another OS
    • Cloud storage - Using storage provided by a cloud vendor to host data for an organization
    • Elasticity - The flexibility to expand or contract based on need
    • Platform as a service - PaaS. Providing a computing platform and software stack as a virtual service
    • Software as a service - SaaS. Providing on-demand online access to software applications without need for local installation
    • Infrastructure as a service - IaaS. Providing on-demand outsourcing options for operating solutions
    • Hosted solution - A deployment concept where the organization must license software and then operates and maintains the software - the hosting provider takes care of the hardware that supports the organization’s software
    • Private cloud - Service within a corporate network and isolated from the internet, internal use only
    • Public cloud - Services are accessible to the general public over an internet connection, usually paid for with subscriptions or pay-per-use, could be free. Data is separated and isolated from other customers, but the overall purpose of the cloud is the same for all customers
    • Hybrid cloud - A mix of public and private
    • Community - A shared environment used and paid for by a group of users or organizations for shared benefit
  • Grid computing - Parallel distributed processing that groups processing nodes to work towards a specific goal
  • Peer to peer - Share tasks and workloads among peers, like grid computing, but there is no central management system

Internet of Things

  • Smart devices - Mobile devices that have customization options, apps, on-device or in-cloud AI
  • Internet of things - IoT. A class of smart devices that are internet-connected to provide automation, remote control, or AI processing to traditional or new devices
  • Often not designed with security in mind

Industrial Control Systems

  • Industrial control system - ICS. Computer management device that controls industrial processes and machines
  • SCADA - Supervisory control and data acquisition. Can operate as a stand-alone device, or networked together with other systems.

Assess and Mitigate Vulnerabilities in Web-Based Systems

  • Open web application security project - OWASP. A nonprofit security project focused on improving security for online applications. Also a large community.
    • See their Top 10 critical attacks for an idea of what’s hot right now
  • Injection attack - Exploitation where an attacker can submit code to a system to modify its operations
    • SQL injection attacks are related to SQL queries and databases (see the sketch after this list)
  • Input validation - Limiting the types of data a user provides in a form. Input sanitization can happen here and should include escaping metacharacters.
    • Escaping metacharacters - The process of marking the metacharacter as merely a normal or common character, such as a letter or number, thus removing its special programmatic powers
  • Limit account privileges - the database account that the web server uses should only have the rights to do the limited operations it’s expected to perform
  • Directory traversal - An attack that enables an attacker to jump out of the web root directory structure and into other parts of the filesystem hosted by the web server
  • Cross-site request forgery - XSRF. An attack similar to XSS (cross site scripting) where the purpose is to trick the user or browser into performing actions they had not intended or would not have authorized
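
Here's a minimal sqlite3 sketch of why input sanitization and parameterized queries matter - the schema and the payload are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"   # classic injection payload

# Vulnerable: user input is concatenated into the query, so the OR clause
# matches every row and leaks data the caller should never see.
print(conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall())   # [('alice', 's3cret')]

# Safer: a parameterized query treats the entire input as a literal value,
# which is the programmatic equivalent of escaping metacharacters.
print(conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall())   # []
```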

Assess and Mitigate Vulnerabilities in Mobile Systems

  • Malicious insiders can bring in malicious code from outside, or exfiltrate information using mobile devices
  • Eavesdropping is possible
  • Full device encryption - Encrypt the entire device so that if it is lost, the data should be safe if the device is not unlocked
  • Remote wiping - The ability to sanitize a device if it is lost or stolen, without having physical access to the device
  • Lockout - When a user fails to provide their credentials after repeated attempts, the device may implement a cooldown period, or become locked forever
  • Screen locks - Designed to prevent someone from casually picking up a device and using it
  • GPS - Global Positioning System. Can be used to track the location of devices
  • Application control - Device management solution that allows administrators to limit which applications can be installed on a device
  • Storage segmentation - Artificially compartmentalizing types of data on a storage medium - ex. separating the device OS and installed apps
  • Asset tracking - Management process used to maintain oversight over an inventory
  • Inventory control - Hardware asset tracking, and the concept of using a mobile device as a means of tracking inventory in a warehouse
  • Mobile device management - Software solution for managing mobile devices over a communications channel
  • Device access control - Blocking unauthenticated access to a device
  • Removable storage - Should also be encrypted, if allowed
  • Disabling unused features - Remove apps and features that aren’t needed for business tasks or common personal use
  • Key management - Most crypto failures are from poor management, not poor algorithms. Have good key selection, based on quality random numbers.
  • Credential management - Storage of credentials in a central location
  • Geotagging - Using GPS to embed the location of a photo or other file
  • Application allow-listing - Prohibiting unauthorized software from executing. Deny by default, implicit deny.
  • BYOD - Bring your own device.
  • COPE - Corporately owned, personally enabled.
  • CYOD - Choose your own device.
  • Data ownership - Commingling personal data and business data is likely to occur when using personal devices for business tasks, or vice versa.
    • Segmentation can help, isolation
  • Acceptable use policy - Critical to define what is and is not acceptable usage while mixing personal activities and business tasks

Assess and Mitigate Vulnerabilities in Embedded Devices and Cyber-Physical Systems

  • Embedded system - A computer implemented as part of a larger system, providing a limited set of specific functions related to the system itself
  • Static systems - A set of conditions that don’t change
  • Cyber-physical systems - Offer computational means to control something in the physical world, often tied to IoT
  • Security by:
    • Network segmentation - Keep these devices isolated from other systems
    • Security layers - Devices with different levels of sensitivity are grouped together and isolated
    • Application firewalls - Define a strict set of communication rules for a service and all users
    • Manual updates - Used in static environments to ensure that only tested updates, patches, authorized changes are implemented
    • Firmware version control - same as above, manual updates
    • Wrappers - Something used to enclose or contain something else, related to Trojan horse malware, but also encapsulation solutions
    • Monitoring - All devices must be monitored
    • Control redundancy and diversity - Especially for systems that are dangerous or whose outages may cause loss of life, you must be able to sustain an attack or outage that diminishes capacity

Essential Security Protection Mechanisms

  • Software should not be trusted
  • Layering - Implement a structure similar to a ring model
  • Abstraction - Fundamental to object oriented programming, users of an object don’t need to know the details of how the object works
  • Data hiding - This is different from security by obscurity, which is a bad practice. Data hiding means placing data in a location where it is not visible to subjects that have no need for it.
  • Process isolation - Requires the OS to provide separate memory spaces for each process’s instructions and data
  • Hardware segmentation - Similar to process isolation in purpose, but for hardware

Security Policy and Computer Architecture

  • Principle of least privilege - Discussed more in Chapter 13, users and resources should never have any more privileges than they need to do their assigned function. Avoid over-provisioning permissions.
  • Separation of privilege - Builds on least privilege, using granular access permissions for each type of privileged operation so some can be assigned and others can be restricted - like separation of duties
  • Accountability - You must be able to log everything and determine who or what is responsible for activities performed

Common Architecture Flaws and Security Issues

  • Covert channels - A method that is used to pass information over a path that is not normally used for communication
    • Covert timing channel - Conveys information by altering the performance of a system component or modifying a resource’s timing in a predictable manner - hard to detect
    • Covert storage channel - Conveys information by writing data to a common storage area where another process can read it
  • Attacks based on design or coding flaws and security issues - See the OWASP section above
  • Trusted recovery - Ensures all security controls remain in place in the event of a crash
  • Input and parameter checking - Input sanitization, avoid buffer overflows, and many other issues addressed above
  • Maintenance hooks and privileged programs - Entry points into a system that are known only by the developer of the system, also known as back doors
  • Incremental attacks - Slow, gradual attack in increments
    • Data diddling - When an attacker gains access to a system and makes small, random, incremental changes to data during its lifecycle rather than something big and obvious
    • Salami attack - Systemic whittling at assets in accounts or other records with financial value where small amounts are deducted from balances regularly
  • Timing, state changes, and communication disconnects - Computers operate with precision and that’s a weakness
    • Time of check - TOC, the time at which a subject checks the status of an object
    • Time of use - TOU, the time at which a subject uses an object
    • TOCTTOU attack - Time of check to time of use attack, replacing a data file after its identity has been verified but before it has been used by a subject (see the sketch after this list)
  • Electromagnetic radiation - EM radiation, reduce emanations through shielding
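
Here's a minimal sketch of the TOC/TOU gap in file handling - the path is hypothetical:

```python
import os

path = "/tmp/report.txt"   # hypothetical file

# Vulnerable pattern: check, then use. An attacker who swaps the file
# (e.g., for a symlink) between these two steps wins the race.
if os.access(path, os.R_OK):   # time of check
    with open(path) as f:      # time of use
        data = f.read()

# Less racy pattern: skip the separate check and just attempt the open,
# handling failure - the check and the use become a single operation.
try:
    with open(path) as f:
        data = f.read()
except OSError:
    data = None
```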

CISSP Study Notes Chapter 8 - Principles of Security, Models, Design, and Capabilities

Chapter 8 covers implementing and managing engineering processes using secure design principles, the fundamental concepts of security models, how to select controls based on security requirements, and understanding security capabilities of information systems.

Chapter 8: Principles of Security, Models, Design, and Capabilities

My key takeaways and crucial points

Objects and Subjects

  • Subject - The user or process that makes a request to access a resource
  • Object - The resource a subject wants to access
  • Transitive trust - If A trusts B, and B trusts C, then A inherits trust of C through B
    • May enable bypassing of restrictions or limits between A and C

Closed and Open Systems

  • Closed system - Designed to work well with a narrow range of other systems
  • Open system - Designed using agreed-upon industry standards
  • This is different than closed/open source
  • Attacking a closed system is harder

Techniques for Ensuring CIA

  • Confinement - Allows a process to read from and write to only certain memory locations and resources
  • Bounds - Processes are assigned an authority level
    • Bounds of a process consist of limits set on the memory locations and resources it can access
  • Isolation - Enforcing access bounds. Used to protect the operating environment, the kernel of the OS, and other applications
  • Controls - Uses access rules to limit the access of a subject to an object
    • MAC - Mandatory access control
    • DAC - Discretionary access control
    • Chapter 14 is all about this topic
    • Each subject has attributes that define clearance/authority/access to resources
    • Each object has attributes that define its classification
  • Trust and Assurance - Security issues should not be added on as an afterthought
    • Trusted system - One where all protection mechanisms work together to process sensitive data for different users while maintaining a secure computing environment.
    • Assurance - The degree of confidence in the satisfaction of security needs
    • When change occurs, the system needs to be reevaluated

Understand the Fundamental Concepts of Security Models

  • Security model - A way for designers to map abstract statements into a security policy
  • Token - A separate object associated with a resource that describes its security attributes
  • Capabilities list - Maintains a row of security attributes for each controlled object
  • Security label - A permanent part of the object to which it’s attached
  • Trusted computing base - TCB. A combination of hardware, software, and controls that work together to form a trusted base to enforce your security policy
  • Security perimeter - An imaginary boundary that separates Trusted Computing Base from the rest of the system
  • Trust paths - Secure channels for Trusted Computing Base to talk to the rest of the system
  • Reference monitor - Part of TCB that validates access to every resource. Stands between every subject and object, verifying requests
  • Security kernel - Collection of components in TCB that work together to implement reference monitor functions
  • State machine model - Describes a system that is always secure no matter what state it’s in
  • State - A snapshot of a system at a specific moment in time
  • Information flow model - Focuses on the flow of information based on a state machine model
    • Discussed later in this chapter
    • Bell-LaPadula - Concerned with information flow from high security level to a low security level
    • Biba - Concerned with preventing information flow from a low security level to a high security level
  • Noninterference - Concerned with how the actions of a subject at a higher security level affect the system state or the actions of a subject at a lower security level
    • Isolation
  • Take-Grant model - Employs a directed graph to dictate how rights are passed from one subject to another, or from a subject to an object
  • Access Control Matrix - A table of subjects and objects that indicates the actions or functions each subject can perform on each object
    • Each row is a capabilities list tied to a subject, listing the valid actions that subject can take on each object; each column is an Access Control List (ACL) tied to an object (see the sketch after this list)
  • Lattice-Based Access Control
    • Simple property - Concerned with reading data
    • Star property - Concerned with writing data
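
Here's a toy access control matrix as a nested dict - subjects, objects, and rights are invented:

```python
# Rows are subjects (each row is that subject's capabilities list);
# columns are objects (each column, read vertically, is the object's ACL).
matrix = {
    "alice": {"payroll.db": {"read", "write"}, "audit.log": {"read"}},
    "bob":   {"payroll.db": {"read"},          "audit.log": set()},
}

def allowed(subject: str, obj: str, right: str) -> bool:
    """A reference-monitor-style check against the matrix."""
    return right in matrix.get(subject, {}).get(obj, set())

print(allowed("alice", "payroll.db", "write"))  # True
print(allowed("bob", "payroll.db", "write"))    # False
```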

Bell-LaPadula Model

  • US Department of Defense developed in 1970s
  • Concerned with protecting classified information
  • A subject with any level of clearance can access resources at or below its clearance level
  • Prevents leaking or transfer of classified information to less secure clearance levels
  • Does not address integrity or availability
  • No read up (simple property)
  • No write down (star property)
  • Uses an access control matrix (discretionary property - need to know)

Biba Model

  • Integrity is more important than confidentiality
  • Also built on state machine concept, based on information flow, and is multilevel
  • Very similar to Bell-LaPadula, except inverted
  • No read down (simple property)
  • No write up (star property)

Clark-Wilson Model

  • Does not use lattice structure, but rather uses access triple control
    • Subject
    • Program/transaction
    • Object
  • Objects are accessed only through programs
  • Constrained data item - CDI. Any data item whose integrity is protected by the security model
  • Restricted interface model - Elements of an interface are limited based on a subject's rights and authorization

Brewer and Nash Model (aka Chinese Wall)

  • Changes dynamically based on a user’s previous activity
  • Chinese Wall model - Creates a class of data that defines which security domains are potentially in conflict, and prevents subjects with access to one domain in a specific conflict class from accessing any other information in that conflict class
  • Protects from conflicts of interest

Goguen-Meseguer Model

  • Integrity model, less well known than Biba
  • Foundation of noninterference conceptual theories

Sutherland Model

  • Integrity model, focused on preventing interference

Graham-Denning Model

  • Focused on the secure creation and deletion of both subjects and objects
  • A collection of eight protection rules or actions that define boundaries (Only one with all eight)
    • Create an object
    • Create a subject
    • Delete an object
    • Delete a subject
    • Provide read access right
    • Provide grant access right
    • Provide delete access right
    • Provide transfer access right

Select Security Controls Based On Systems Security Requirements

  • Need a tested and technical evaluation
  • Need a formal comparison of its design and security criteria
  • Sometimes you may need a trusted third party to provide a seal of approval
  • Trusted Computer System Evaluation Criteria - TCSEC. Standards to attempt to specify minimum security criteria for various categories of use
    • Classes and required functionality
      • D - Minimal protection
      • C1 - Discretionary protection
      • C2 - Controlled access protection
      • B1 - Labeled security
      • B2 - Structured protection
      • B3 - Security domains
      • A1 - Verified protection
  • Common Criteria - Represents a global effort that is similar to TCSEC in function/purpose
    • Assurance levels
    • EAL1 - Functionally tested
    • EAL2 - Structurally tested
    • EAL3 - Methodically tested and checked
    • EAL4 - Methodically designed, tested, and reviewed
    • EAL5 - Semi-formally designed and tested
    • EAL6 - Semi-formally verified, designed, and tested
    • EAL7 - Formally verified, designed, and tested
  • Industry and International Security Implementation Guidelines
    • Payment Card Industry Data Security Standard (PCI DSS)
    • International Organization for Standardization (ISO)
  • Certification - Comprehensive evaluation of security features of an IT system
  • Accreditation - A formal declaration by the designated approval authority that an IT system is approved to operate in a particular security mode
    • Often an iterative process

Understand Security Capabilities of Information Systems

  • Virtualization - Used to host one or more operating systems within the memory of a single host computer
  • Trusted Platform Module - TPM. Specification for a cryptoprocessor chip on a mainboard, and the general name for implementation of the spec
    • Stores and processes cryptographic keys
  • Hardware security module - HSM. A cryptoprocessor used to manage/store digital encryption keys, accelerate crypto operations, support faster digital signatures, improve authentication
  • Interfaces - Implemented with an application to restrict what users can do or see based on their privileges
  • Fault tolerance - The ability of a system to suffer a fault but continue to operate

CISSP Study Notes Chapter 7 - PKI and Cryptographic Applications

Chapter 7 is all about applying cryptography. It covers the cryptographic lifecycle, methods, Public Key Infrastructure, and key management practices. It also covers Digital signatures, nonrepudiation, integrity, cryptanalytic attacks, and Digital Rights Management.

Chapter 7: PKI and Cryptographic Applications

My key takeaways and crucial points

Public and Private Keys

  • Every user maintains a public key and a private key in asymmetric crypto
  • Private key is preserved for the sole use of the individual who owns the key
  • Public key is available to anyone they want to communicate with
  • RSA
    • Choose two large prime numbers approximately 200 digits each labeled p and q
    • Multiply them together n = p * q
    • Select a number e that is:
      • Less than n
      • e and (p - 1)(q - 1) are relatively prime (no common factors)
    • Find a number d such that (e * d) mod (p - 1)(q - 1) = 1
    • Distribute e and n as the public key and keep d as the secret private key (a toy walkthrough in code follows this list)
  • Key length is the most important parameter
    • RSA key length: 1024 bits
    • DSA key length: 1024 bits
    • Elliptic curve key length: 160 bits
  • El Gamal
    • Major disadvantage - this algo doubles the length of any message it encrypts
  • Elliptic Curve
    • Better for low power devices like phones
    • 1024 bit RSA key is cryptographically equivalent to a 160 bit elliptic curve key
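
As referenced above, here is a toy walkthrough of the RSA steps in Python, with deliberately tiny primes. Real keys use primes hundreds of digits long, and the modular-inverse form of pow() needs Python 3.8+; treat this as a sketch for intuition, not a real implementation.

```python
# Toy RSA with tiny primes - for intuition only, never for real security.
p, q = 61, 53                      # the two primes (real ones are ~200 digits each)
n = p * q                          # modulus, distributed as part of the public key
phi = (p - 1) * (q - 1)
e = 17                             # less than n and shares no factors with phi
d = pow(e, -1, phi)                # modular inverse, so (e * d) mod phi == 1

message = 42                       # must be smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d
assert recovered == message
```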

Hash Functions

  • Hash functions - Take a potentially long message and generate a unique output value derived from the content of the message
    • Message digest - The output of a hash function
  • Basic hash function requirements
    • Input can be any length
    • Output has a fixed length
    • Should be relatively easy to compute for any input
    • Hash function is one-way which means it is extremely hard to determine the input when provided the output
    • Collision free - hard to find two inputs that produce the same message digest
  • SHA - Secure Hash Algorithm
    • SHA-1 produces 160 bit digest
    • Processes 512 bit blocks
    • Pads messages to fit
    • SHA-256 produces 256 bit messages using 512 bit block size
    • SHA-224 uses truncated version of SHA-256 hash to make a 224 bit message with 512 bit block size
    • SHA-512 produces 512 bit message digests using a 1024 bit block size
    • SHA-384 uses truncated SHA-512 to produce 384 bit digest with 1024 bit block size
  • MD5
    • 512 bit blocks
    • 4 distinct rounds of computation
    • Message length must be 64 bits less than a multiple of 512 bits
      • Uses padding to make up the difference
  • Hash of Variable Length (HAVAL) - MD5 variant
    • Hash value length: 128, 160, 192, 224, 256 bits
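
The fixed-length output property above is easy to see with Python's standard hashlib module:

```python
import hashlib

msg = b"CISSP study notes"
print(hashlib.md5(msg).hexdigest())     # 128-bit digest (broken - collisions are practical)
print(hashlib.sha1(msg).hexdigest())    # 160-bit digest (deprecated for security use)
print(hashlib.sha256(msg).hexdigest())  # 256-bit digest
print(hashlib.sha512(msg).hexdigest())  # 512-bit digest

# Digest length never changes with input length - a core hash function requirement:
assert len(hashlib.sha256(b"x" * 1_000_000).digest()) == len(hashlib.sha256(b"x").digest())
```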

Digital Signatures

  • Enforces nonrepudiation
  • Assures the recipient that the message was not altered while in transit
  • If Alice wants to sign a message she’s sending to Bob…
    • Alice generates a message digest of the original plaintext using a hashing algo
    • Alice encrypts the message digest using her private key - this is the digital signature
    • Alice appends the signed message digest to the plaintext
    • Alice transmits the appended message to Bob
    • Bob decrypts the digital signature using Alice’s public key
    • Bob uses the same hashing function to create a message digest of the full plaintext received from Alice
    • Bob compares the decrypted message digest he got from Alice with the message digest he computed himself
      • If the hashes match, the signature is verified and the message was sent from Alice
      • If the hashes do not match, the signature is invalid and either the message didn’t come from Alice, or it was modified in transit
  • Does not provide privacy/confidentiality
  • Does provide integrity, authentication, nonrepudiation
  • HMAC - Hashed Message Authentication Code
    • Provides integrity, does not provide nonrepudiation
    • Depends on a shared secret key
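
Alice and Bob's exchange maps closely onto real signing APIs. A sketch using the third-party pyca/cryptography package (an assumption; any RSA library exposes a similar sign/verify flow, with the hash-then-encrypt steps handled inside one call):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

message = b"Meet at noon"
# Alice: hash the plaintext and encrypt the digest with her private key (one call does both)
signature = alice_private.sign(message, pss, hashes.SHA256())

# Bob: re-hash the received plaintext and check it against the decrypted digest
try:
    alice_public.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid - sent by Alice and unaltered in transit")
except InvalidSignature:
    print("Signature invalid - wrong sender or modified in transit")
```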

Which Key To Use When

  • To encrypt a message - Use the recipient's public key
  • To decrypt a message sent to you - Use your private key
  • To digitally sign a message you are sending to someone else - Use your private key
  • To verify the signature on a message sent by someone else - Use the sender's public key

Digital Signature Standard

  • Federal Information Processing Standard (FIPS) 186-4

Public Key Infrastructure

  • Certificates
    • Endorsed copies of a public key
    • International standard: X.509
    • Contains
      • Version of X.509 it conforms to
      • Serial number
      • Signature algo identifier
      • Issuer name
      • Validity period
      • Subject’s name
      • Subject’s public key
    • Certificate extensions are custom variables inserted into a certificate
  • Certificate Authorities
    • Neutral organizations that offer notarization services for digital certificates
    • Registration authorities assist by verifying users' identities prior to issuing certificates, but do not issue certs themselves
    • Certificate Path Validation - CPV. Verifies that each certificate in the certificate path, from the original start or root of trust down to the server or client in question, is valid and legitimate.
  • Certificate Generation and Destruction
    • Enrollment
      • Prove your identity to the CA
      • Provide them your public key
      • CA signs your certificate with their private key
    • Verification
      • Verify the cert by checking the CA’s digital signature using the CA’s public key
      • Make sure it was not revoked (Certificate Revocation List - CRL)
        • Or Online Certificate Status Protocol (OCSP)
      • A certificate is valid if:
        • The digital signature of the CA is authentic
        • You trust the CA
        • The certificate is not listed on a CRL
        • The certificate contains the data you are trusting
    • Revocation
      • Revoking a certificate declares it invalid before its natural expiry
      • Certificate Revocation Lists (CRLs) contain serial numbers of certs that a CA revoked along with when they were revoked
      • Online Certificate Status Protocol (OCSP) eliminates latency with CRLs by providing a real-time check
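
To see the X.509 fields listed above on a live certificate, here is a sketch that fetches a server cert with the standard library and parses it with the third-party pyca/cryptography package (both the package and the example.com host are assumptions):

```python
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))  # placeholder host
cert = x509.load_pem_x509_certificate(pem.encode())

print(cert.version)                                 # X.509 version it conforms to
print(cert.serial_number)                           # serial number
print(cert.signature_algorithm_oid)                 # signature algorithm identifier
print(cert.issuer.rfc4514_string())                 # issuer (CA) name
print(cert.not_valid_before, cert.not_valid_after)  # validity period
print(cert.subject.rfc4514_string())                # subject's name
print(cert.public_key())                            # subject's public key
```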

Asymmetric Key Management

  • Choose an encryption system whose algo is in the public domain
  • Use a key length that balances security requirements with performance
  • Keep private key secret
  • Retire keys when they’re done being useful
  • Back up your key
  • Hardware security modules - HSMs. Store and manage encryption keys in a secure manner
    • Yubikey is an example

Applied Cryptography

  • Portable devices
    • Windows includes BitLocker and Encrypting File System (EFS)
    • Mac has FileVault
    • Linux has VeraCrypt
    • All for disk encryption of mobile devices like laptops
  • Trusted Platform Module - TPM. A chip on the motherboard that stores and manages keys used for full disk encryption.
  • Email
    • For confidentiality, encrypt the message
    • For integrity, hash the message
    • For authentication, integrity, and/or nonrepudiation, digitally sign the message
    • For confidentiality, integrity, authentication, and nonrepudiation, encrypt and sign the message
    • Always the responsibility of the sender
    • Pretty Good Privacy - PGP
      • “web of trust”
      • Secure email system
    • Secure/Multipurpose Internet Mail Extensions - S/MIME
      • Uses X.509 certificates for exchanging crypto keys
  • Web Applications
    • SSL and TLS (Secure Sockets Layer, and Transport Layer Security)
    • HTTPS (Hypertext Transfer Protocol Secure) uses port 443 to negotiate encrypted communications between web servers and clients
    • Depends on the exchange of server digital certificates
      • When a user accesses a website, the browser gets the web server’s cert and extracts the public key from it
      • The browser creates a random symmetric key, uses the server’s public key to encrypt it, and sends the encrypted symmetric key to the server
      • The server decrypts the symmetric key using its private key, and the two systems exchange all future messages using that symmetric key (a minimal sketch of this negotiation follows this list)
    • Padding Oracle On Downgraded Legacy Encryption (POODLE) is a downgrade attack - forcing a system to use an older/vulnerable version of TLS or SSL instead of an up to date version
  • Steganography and Watermarking
    • Steganography - Using crypto techniques to embed secret messages within another message
    • Ex: Adding digital watermarks to documents to protect intellectual property
  • Digital Rights Management - DRM
    • Using encryption to enforce copyright restrictions on digital media
    • High-Bandwidth Digital Content Protection - HDCP. Provides protection over digital connections like HDMI
    • Advanced Access Content System - AACS. Protects Blu-Ray
    • Video games increasingly depend on having an internet connection
    • Document DRM may want to control permissions like who can read, modify, remove watermarks, download/save, print, take a screenshot
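
As referenced in the web applications bullets above, the whole HTTPS negotiation can be observed with nothing but Python's standard library (example.com is a placeholder host):

```python
import socket
import ssl

ctx = ssl.create_default_context()  # trusts the system CA store and validates hostnames
with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())   # negotiated protocol, e.g. TLSv1.3 - POODLE forced downgrades here
        print(tls.cipher())    # the symmetric cipher suite protecting the rest of the session
        print(tls.getpeercert()["subject"])  # the server certificate the client just validated
```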

Networking

  • IPSec and Internet Security Association and Key Management Protocol (ISAKMP)
  • Circuit Encryption
    • Link encryption - Protects entire communication circuits by creating a secure tunnel
    • End-to-end encryption - Protects communications between two parties
    • The difference between the two is that with link encryption, everything, including headers, trailers, addresses, and routing data, is also encrypted
    • When encryption happens at higher OSI layers, it’s usually end-to-end
    • Secure Shell - SSH. End-to-end encryption
  • IPSec
    • Could be any two entities - servers, routers, gateway, a combo
    • Uses public key crypto to provide encryption, access control, nonrepudiation, message authentication
    • VPNs use IPSec
    • Authentication Header - AH. Provides message integrity and nonrepudiation
    • Encapsulating Security Payload - ESP. Provides confidentiality and integrity of packet contents
    • Transport mode - only the packet payload is encrypted
    • Tunnel mode - entire packet, including header, is encrypted
    • Set up a session with a security association (SA)
      • Represents the communication session and records configuration and status
      • Need two SAs, one for each direction
      • If using both AH and ESP bi-directional, you need four SAs
  • ISAKMP
    • Used by IPSec for negotiating, establishing, modifying, deleting SAs
  • Wireless networking
    • Wired Equivalent Privacy - WEP. 64 bit and 128 bit encryption for IEEE 802.11
      • A lot of flaws exist here, not considered secure - should never be used
    • WiFi Protected Access - WPA. Improves WEP by adding Temporal Key Integrity Protocol (TKIP)
      • WPA2 adds AES crypto
      • Does not provide end-to-end encryption
    • 802.1x - authentication and key management framework for both wired and wireless networks
      • Client runs software called a supplicant

Cryptographic Attacks

  • Analytic Attack - Using math to reduce complexity of algo
  • Implementation Attack - Exploits weaknesses in the implementation of a crypto system
  • Statistical Attack - Using math to find patterns, floating-point errors, inability to find truly random numbers
  • Brute Force - Trying every possible combination for key and password
    • Key length is critical for defending brute force
    • Rainbow tables - precomputed values for crypto hashes, commonly for cracking passwords
  • Salt - A random value added to the end of the password before it is hashed
  • Frequency analysis - Counting the number of times each letter appears in the ciphertext
  • Known Plaintext - Attacker has a copy of the encrypted message, along with plaintext, can be used to determine the key
  • Chosen Ciphertext - Attacker has ability to decrypt portions of the ciphertext
  • Chosen Plaintext - Attacker has ability to encrypt plaintext messages of their choosing
  • Meet in the Middle - Attacker has a known plaintext message and attacks double encryption by encrypting from one end and decrypting from the other, meeting in the middle
  • Man in the Middle - A malicious individual sits between two communicating parties and intercepts communications
  • Birthday - AKA collision attack, finding flaws in hashing functions where two inputs generate the same output
  • Replay - A system lacks temporal protections, a message can be sent more than once at different times
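
Tying the brute force, rainbow table, and salt bullets together, here is a minimal password-storage sketch using only Python's standard library:

```python
import hashlib
import hmac
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # a random salt makes precomputed rainbow tables useless
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)  # deliberately slow vs. brute force

# Verification recomputes with the stored salt and compares in constant time:
attempt = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 600_000)
assert hmac.compare_digest(stored, attempt)
```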

CISSP Study Notes Chapter 6 - Cryptography and Symmetric Key Algorithms

Chapter 6 covers data security controls, understanding data states, and then it gets into cryptography. This chapter goes into assessing and mitigating vulnerabilities of systems related to cryptography, cryptographic lifecycle and methods, nonrepudiation, and data integrity.

Chapter 6: Cryptography and Symmetric Key Algorithms

My key takeaways and crucial points

Historical Milestones in Crypto

  • Caesar cipher - Shift each letter of the alphabet three places to the right. A becomes D, B becomes E, etc..
    • Cracked via frequency analysis - Most common English letters are E, T, A, O, N, R, I, S, H. Attackers find the common substitutions and experiment until they can discern the message.
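
A Caesar shift is only a few lines of code, which makes the frequency-analysis weakness easy to experiment with:

```python
def caesar(text: str, shift: int = 3) -> str:
    # Shift each letter of the alphabet; non-letters pass through unchanged.
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

assert caesar("ABC") == "DEF"
assert caesar("DEF", shift=-3) == "ABC"  # decrypting is just the opposite shift
```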

Crypto Basics

  • Goals of Crypto
    • Confidentiality
      • Preservation of secrecy of stored information
      • Symmetric crypto systems use a secret key shared by all users
      • Asymmetric crypto systems use individual combos of public and private keys for each user
      • Data in transit aka data on the wire - exam terms, same thing
    • Integrity
      • Message digests - Created when a message is transmitted and used to ensure data wasn't altered; encrypting a digest with a private key produces a digital signature
    • Authentication
      • Challenge-response authentication
      • Prove that one can encrypt/decrypt something to validate identity
    • Nonrepudiation
      • Only offered by asymmetric systems
      • Assurance that the message originated by the sender, not someone pretending to be the sender
  • Cryptography Concepts
    • Plaintext - Before a message is coded
    • Ciphertext - After a message is encrypted
    • Keys - Used in encryption calculations.
    • Key space - The range of values that are valid for use as a key in a specific algorithm, aka bit size
    • Kerckhoffs’s Principle - A cryptographic system should be secure even if everything about the system, except the key, is public knowledge
    • Cryptovariable - Sometimes used to refer to keys
    • Cryptography - Implementing secret codes and ciphers
    • Cryptanalysis - The study of methods to defeat codes and ciphers
    • Cryptology - Cryptography and cryptanalysis together

Cryptographic Mathematics

  • Boolean Math - Logical Operations
    • AND - Represented by the ^ symbol, checks whether two input values are both true
    • OR - Represented by the v symbol, checks whether at least one input value is true
    • NOT - Represented by the ! symbol, reverses the value of an input variable
    • XOR - Represented by the ⊕ symbol, returns true when only one of the input values is true (demonstrated after this list)
  • Modulo function - Shows the remainder value left over after a division operation; very important to crypto operations
  • Nonce - A random number that acts as a placeholder variable in mathematical functions
  • Zero-Knowledge Proof - Prove your knowledge of a fact to a third party without revealing the fact itself to the third party
  • Split knowledge - Knowledge is divided among multiple users
    • M of N control - Requires a minimum (M) number of the total agents (N) work together to perform a high-security action
  • Work function - Measures the strength of a crypto system
    • Time and effort required to complete a brute-force attack
    • Work function only needs to be slightly greater than the time value of the asset
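
As referenced in the XOR bullet above, a short demonstration of why XOR and modulo keep appearing in crypto math:

```python
# XOR is its own inverse - applying the same key twice recovers the original:
plain, key = 0b01101100, 0b10110101
cipher = plain ^ key
assert cipher ^ key == plain

# Modulo wraps values into a fixed range, like a clock face:
assert (23 + 5) % 26 == 2  # shifting past 'Z' wraps back around the alphabet
```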

Codes vs. Ciphers

  • Codes - Crypto systems of symbols that represent words or phrases, sometimes secret, not always meant to provide confidentiality
    • Ex: “The eagle has landed”
  • Ciphers - Always meant to hide the true meaning of a message
    • Ex: The transformation of plaintext to ciphertext

Transposition Ciphers

  • Transposition cipher - Use an encryption algorithm to rearrange the letters of a plaintext message to form ciphertext
  • Can use a keyword to perform a columnar transposition

Substitution Ciphers

  • Substitution cipher - Use an encryption algorithm to replace each character or bit of the plaintext with a different character
  • Vigenère cipher - Uses a chart of the alphabet shifted once per line, then a key is used to decrypt
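
A minimal Vigenère sketch (letters only), using the classic ATTACKATDAWN textbook example:

```python
def vigenere_encrypt(plain: str, key: str) -> str:
    # Each key letter selects a shifted alphabet; the key repeats across the message.
    out = []
    for i, ch in enumerate(plain.upper()):
        shift = ord(key.upper()[i % len(key)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

assert vigenere_encrypt("ATTACKATDAWN", "LEMON") == "LXFOPVEFRNHR"
```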

One-Time Pads

  • One-time pads - Use a different substitution alphabet for each letter of the plaintext message
  • AKA Vernam ciphers
  • One-time pad must be randomly generated
  • Must be physically protected against disclosure
  • OTP must be used only once
  • Key must be at least as long as the plaintext
  • When used properly, OTP is unbreakable - no repeating patterns
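
Those rules are easy to demonstrate in code: the pad below is truly random, at least as long as the message, and must be used exactly once (reusing a pad lets an attacker XOR two ciphertexts together and cancel the key out):

```python
import os

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))  # random and at least as long as the plaintext
ciphertext = bytes(m ^ k for m, k in zip(message, pad))
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))
assert recovered == message     # used once and kept secret, this is unbreakable
```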

Running Key Ciphers

  • Running key cipher - aka a book cipher
  • Key is as long as the message itself, often chosen from a common book

Block Ciphers

  • Block ciphers - Operate on chunks or blocks of a message

Stream Ciphers

  • Stream ciphers - Operate on one character or bit of a message (or data stream) at a time

Confusion and Diffusion

  • Confusion - Occurs when the relationship between the plaintext and the key is too complicated for an attacker to determine the key by altering the plaintext and analyzing the resulting ciphertext
  • Diffusion - Occurs when a change in the plaintext results in multiple changes throughout the ciphertext
  • Substitution introduces confusion
  • Transposition introduces diffusion

Modern Crypto

  • Crypto keys
    • Kerckhoffs’s Principle - Opening algorithms to public scrutiny actually improves their security
    • Modern cryptosystems rely on secrecy of one or more cryptographic keys
  • Symmetric Key Algorithms
    • AKA secret key and private key crypto
    • Rely on a shared secret
    • Weaknesses
      • Need to distribute the key - See Diffie-Hellman
      • Does not implement nonrepudiation
      • Algo doesn’t scale well - secure private comms between individuals can only be achieved if every possible combo of users has their own shared key
        • (n * (n - 1)) / 2 = number of keys needed (see the worked sketch after this list)
      • Keys need to be regenerated every time group membership changes
    • Very fast
  • Asymmetric Key Algorithms
    • AKA public key algos
    • Each user has two keys
      • Public key - Shared with all users
      • Private key - Kept secret, known only to the user
    • Supports digital signing
    • Transience - new users requires only one new public-private key pair, users can be easily removed
    • Only need to regenerate keys when a user’s private key is compromised
    • Can provide integrity, authentication, nonrepudiation
    • Simple key distribution
    • No pre-existing communication link needs to exist
    • Big disadvantage is slow speed of operation
    • Lots of applications use asymmetric crypto to establish a connection and exchange a symmetric secret, then the rest of the session is encrypted with the symmetric key
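
Here is the worked sketch of the key-count formula above, showing why purely symmetric systems collapse at scale:

```python
def symmetric_keys_needed(users: int) -> int:
    # Every pair of users needs its own shared secret key.
    return users * (users - 1) // 2

assert symmetric_keys_needed(2) == 1
assert symmetric_keys_needed(10) == 45
assert symmetric_keys_needed(1000) == 499_500  # versus 2,000 keys (one pair each) for asymmetric
```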

Symmetric Cryptography

  • Data Encryption Standard
    • No longer considered secure
    • Superseded by Advanced Encryption Standard (AES)
    • Uses a long series of XOR operations, repeated 16 times (aka 16 rounds of encryption)
    • 56 bit key size
    • Electronic Code Book Mode - ECB
      • Weakest mode
      • 64 bit blocks processed
      • The same block of input produces the same encrypted block
      • Only for exchanging small amounts of data
    • Cipher Block Chaining Mode - CBC
      • Each block of plaintext is XORed with the preceding block of ciphertext before it’s encrypted with DES
      • CBC has an initialization vector and XORs it with the first block of the message - IV must be sent to recipient
      • Errors propagate
    • Cipher Feedback Mode - CFB
      • Streaming cipher version of CBC
      • Operates against data produced in real time
      • Uses a memory buffer instead of block size
      • Uses an IV and chaining
    • Output Feedback Mode - OFB
      • Almost the same as CFB, except instead of XORing an encrypted version of the previous ciphertext, it’s XORed with a seed value
      • Still uses an IV to create the seed value
      • Future seeds are derived by running DES on previous seed
      • No chaining function, so transmission errors don’t propagate
    • Counter Mode - CTR
      • Stream cipher similar to CFB and OFB
      • Uses a simple counter that increments for each operation
      • Errors do not propagate
      • Well suited for use in parallel computing
  • Triple DES
    • Effective key length = 112 or 168 bits, depending on the version
    • 64 bit block size
    • Four versions
      • Encrypt plaintext 3 times using 3 different keys
        • Effective key size of 112 bits
      • Uses three keys, but replaces the second encryption with a decryption operation
      • Only uses two keys
      • Uses two keys, and uses a decryption operation in the middle
  • International Data Encryption Algorithm
    • The 128-bit key is broken up, in a series of operations, into 52 16-bit subkeys
    • Same 5 modes as DES
    • Used in PGP - Pretty Good Privacy
  • Blowfish
    • 64-bit blocks of text
    • Keys between 32 bits and 448 bits
    • Much faster than DES and IDEA
    • Released for public use with no license
  • Skipjack
    • US Government
    • 64-bit blocks of text
    • 80-bit keys
    • Clipper and Capstone encryption chips - just know these are related to Skipjack
    • Supports escrow of encryption keys
    • Not widely embraced because of mistrust of escrow processes within US government
  • RC5 - Rivest Cipher 5
    • Block sizes - 32, 64, 128 bits
    • Key sizes - between 0 and 2040 bits
  • Advanced Encryption Standard - AES
    • 128-bit keys require 10 rounds of encryption
    • 192-bit keys require 12 rounds of encryption
    • 256-bit keys require 14 rounds of encryption
  • Twofish
    • Prewhitening - XORing the plaintext with a separate subkey before the first round of encryption
    • Postwhitening - Uses a similar operation after the 16th round of encryption

There is a table (6.2) that I won’t reproduce here that you should familiarize yourself with before your exam

Symmetric Key Management

  • Creation and distribution
    • Offline - physical distribution
    • Use public key encryption to set up an initial communications link, then exchange a secret key over that secure link
    • Diffie-Hellman
  • Storage and destruction of keys
    • Never store keys on the same system as encrypted data
    • Sensitive keys can be split up and then split knowledge can be used
  • Key escrow and recovery
    • Fair cryptosystems - Secret keys used in communication are divided into two or more pieces, each of which is given to an independent third party
    • Escrowed encryption standard - Provides the government with a technological means to decrypt ciphertext - used in Skipjack

Cryptographic Lifecycle

  • Moore’s Law - Except for the One-Time Pad, all crypto systems have a limited life span
    • A cited trend in the advancement of computing power that states the processing abilities of a microprocessor will double approximately every two years
    • Not a law, just a previous trend
    • Means what is hard to crack today might not be hard to crack tomorrow - especially with quantum computing

CISSP Study Notes Chapter 5 - Protecting Security of Assets

Chapter 5 is concerned with asset security. It discusses identifying and classifying information and assets, as well as determining how to maintain assets and identify asset owners. Chapter 5 also talks about protecting privacy, ensuring proper asset retention, determining data security controls, and establishing information and asset handling requirements.

Chapter 5: Protecting Security of Assets

My key takeaways and crucial points

Defining Sensitive Data

  • Sensitive data is any data that isn’t public or unclassified.
  • Personally identifiable information - PII. Can be used to identify an individual.
    • See NIST 800-122
    • Can distinguish or trace an individual’s identity - ex: name, social security number, date and place of birth
    • Linked or linkable to an individual - ex: medical, financial, employment info
  • Protected health information - PHI. Any health-related info that can be related to a specific person
    • HIPAA
    • Created or received by a healthcare provider
    • Relates to past, present or future physical or mental health or condition of an individual
    • HIPAA defines PHI more broadly
  • Proprietary Data
    • Helps an organization maintain a competitive edge

Defining Data Classification

  • Data classification - Identifying the value of the data to the organization.
    • Critical to protect data confidentiality and integrity
    • Classification labels
    • Identifies how data owners can determine proper classification
  • Government uses top secret, secret, confidential, and unclassified
    • Top secret = “exceptionally grave damage”
    • Secret = “serious damage”
    • Confidential = “damage”
  • Classifications also apply to physical/hardware assets
  • Non-government
    • Use more meaningful labels - ex: confidential, proprietary, private, sensitive, public
    • More discretion
  • Both systems identify relative value of data
    • Top secret is highest for governments
    • Confidential is highest for organizations

Data Classifications

  • Confidential or proprietary - highest level of classified data
  • Private - Should stay private within organization
  • Sensitive - Similar to confidential, breach would cause damage to the mission of the organization
  • Public - Unclassified data
  • For the exam, remember sensitive information refers to anything that isn’t public/unclassified

Defining Asset Classifications

  • If a computer is processing top secret data, the computer is a top secret asset
    • Same with media
  • Asset classification should match the data classifications

Determining Data Security Controls

  • Identity and Access Management - IAM. See Chapters 13 and 14, but has to do with ensuring only authorized personnel can access resources.

Understanding Data States

  • Data at rest - Stored on media.
  • Data in transit - Data that is being transmitted over a network.
  • Data in use - Data in memory or temporary storage while an application is using it.
  • Protect confidentiality with strong encryption in all states

Marking Sensitive Data and Assets

  • Most important information that a mark or label provides is the classification of data
  • When users know the value of the data, they’re more likely to protect it properly based on the classification
  • Headers, footers, watermarks - DLP (data loss prevention) systems can identify documents that include sensitive information, apply appropriate controls
  • Use labels for unclassified media to prevent errors of omission where sensitive data isn’t marked
  • If media or systems need to be downgraded to a less sensitive classification, it needs to be sanitized

Handling Sensitive Information and Assets

  • Backup tapes should be protected with the same level of protection as the data that is backed up

Storing Sensitive Data

  • Must store sensitive data such that it is protected against any type of loss
  • Most obvious protection is encryption
  • Do not neglect physical security practices
    • Theft, device failure
  • The value of any sensitive data is much greater than the value of the media holding the sensitive data
    • It’s more cost effective to purchase high quality backup media than to lose sensitive data

Destroying Sensitive Data

  • NIST 800-88r1
  • Sanitization - clearing, purging, and destroying to ensure data cannot be recovered by any means
    • When a computer is disposed of, sanitization means all nonvolatile memory is removed/destroyed

Eliminating Data Remanence

  • Data remanence - Data that remains on media after the data was supposedly erased.
  • Degausser - Generates heavy magnetic field to destroy data.
    • Cannot degauss SSDs (solid state drives)
    • Best method of sanitizing SSDs is destruction
  • Erasing - Performing a delete operation against a file.
    • Only removes the directory or catalog link to data.
    • The data remains on the drive.
  • Clearing - AKA overwriting.
    • Prepares media for re-use.
    • Unclassified data is written over all addressable locations on the media.
    • May involve writing a single character or bit pattern over all media.
    • It is still possible to retrieve some of the original data using lab or forensics.
  • Purging - More intense form of clearing.
    • Prepares media for reuse in less secure environments.
    • The only method for reuse in less secure environments.
    • Repeats the clearing process multiple times and may combine with other methods like degaussing to completely remove data.
  • Destruction - Most secure method of sanitization.
    • Data cannot be extracted from destroyed media.
  • Declassification - Any process that purges media or a system to prepare it for use in an unclassified environment.
    • Sanitization methods are often more effort than the cost for new media.

Ensuring Appropriate Asset Retention

  • Record retention and media retention are the most important parts of asset retention.
  • Record retention - Maintaining important information as long as it is needed, destroying when it isn’t.
  • Companies cannot legally delete potential evidence after a lawsuit is filed.

Protecting Data with Symmetric Encryption

  • Symmetric encryption uses the same key to encrypt and decrypt data
  • Advanced encryption standard - AES.
    • Supports key sizes of 128 bits, 192 bits, and 256 bits.
  • Triple DES - First implementation uses 56 bit keys.
    • Newer implementations use 112 bit or 168 bit keys.
    • Longer keys are more secure.
  • Blowfish - Key sizes between 32 bits and 448 bits.
    • Bcrypt adds 128 additional bits as a salt to protect against rainbow table attacks.
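
A sketch of symmetric encryption in practice, assuming the third-party pyca/cryptography package (AES in GCM mode; note how one shared key both encrypts and decrypts):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the shared secret - same key encrypts and decrypts
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"backup tape manifest", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"backup tape manifest"
```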

Protecting Data with Transport Encryption

  • HTTP (Hypertext Transfer Protocol) operates in cleartext
  • HTTPS (Hypertext Transfer Protocol Secure) adds encryption
    • Transport Layer Security (TLS) is current standard
    • Secure Sockets Layer (SSL) was precursor
  • Virtual Private Networks (VPNs) create a logical tunnel between private networks
    • Allow employees in remote areas to access the organization’s internal network
    • Internet Protocol Security (IPSec) - VPN encryption; its authentication header (AH) provides authentication and integrity, and its encapsulating security payload (ESP) provides confidentiality
    • Layer 2 Tunneling Protocol (L2TP) - Tunneling protocol commonly paired with IPSec
    • Secure Shell (SSH) is also commonly used

Data Owners

  • Data owner - The person who has ultimate organizational responsibility for data
    • Establishes rules for appropriate use and protection
    • Input regarding security requirements and controls
    • Decides who has access to information and with what privileges
    • Assists in the identification and assessment of security controls
  • NIST 800-18
    • Uses phrase “rules of behavior” which means Acceptable Use Policy

Asset Owners

  • Also uses NIST 800-18
  • Asset owner - Person who owns the asset or system that processes sensitive data
    • Develops a system security plan
    • Ensures system is deployed and operated according to security requirements
    • Delivers appropriate security training
    • Usually the same person as the data owner

Business/Mission Owners

  • NIST 800-18 again
  • Business/Mission owner - Program manager or information system owner
    • Might own processes that use systems managed by other entities

Data Processors

  • Data processor - Any system used to process data
  • GDPR defines a data processor as a “natural or legal person, public authority, agency, or other body, which processes personal data solely on behalf of the data controller”
    • Data controller controls the processing of data
  • EU-US Privacy Shield
  • Swiss-US Privacy Shield
  • The Privacy Shield principles are extensive, but include:
    • Notice - Organization must inform individuals about the purpose of data it collects
    • Choice - Offer a chance to opt out of data collection
    • Accountability for onward transfer - Can only transfer data to other organizations that comply with notice and choice principles
    • Security - Take reasonable precautions to protect personal data
    • Data integrity and purpose limits - Should only collect data that is needed for purposes in the notice
    • Access - Individuals must have access to information an organization holds about them, and to correct, amend or delete it
    • Recourse, enforcement and liability - Must implement mechanisms to ensure compliance and handle complaints
    • These are also general GDPR principles

Pseudonymization

  • Pseudonym - An alias.
  • Pseudonymization - Replacing data with artificial identifiers.
  • Tokenization - Replacing data with tokens.
  • Both methods can be reversed.

Anonymization

  • Anonymization - The process of removing all relevant data so that it is impossible to identify the original subject or person.

More Terms

  • Data administrator - Responsible for granting appropriate access to personnel.
  • Data custodian - Data owners delegate day-to-day tasks to a custodian.
  • User - Any person who accesses data to accomplish work tasks.

Protecting Privacy

  • HIPAA (USA)
  • Personal Information Protection and Electronic Documents (Canada)
  • GDPR (EU)
  • Collection limitation principle - The collection of data should be limited to only what is needed.

Using Security Baselines

  • Baselines provide a starting point and ensure minimum security standards
  • NIST 800-53 - Security Control Baselines are discussed
  • Scoping - Reviewing a list of baseline security controls and selecting the ones that apply to the interesting IT system (the one you’re trying to protect)
  • Tailoring - Modifying the list of security controls within a baseline so they align with the mission of the organization
  • PCI DSS (Payment Card Industry Data Security Standard)
    • For when businesses process major credit cards
  • GDPR
    • Related to EU countries

CISSP Study Notes Chapter 4 - Laws, Regulations, and Compliance

Chapter 4 covers a variety of topics related to determining compliance requirements, contractual and legal standards, and privacy requirements. It also includes information to help you understand legal and regulatory issues that relate to information security in a global context like data breaches, licensing requirements, and privacy.

Chapter 4: Laws, Regulations, and Compliance

My key takeaways and crucial points

Categories of Laws

  • Criminal law - Police and other law enforcement agencies are concerned with
    • Penalties include community service, fines, deprivation of civil liberties (prison time)
  • Civil law - Contract disputes, employment matters, things that need impartial arbitration
    • Law enforcement doesn’t normally become involved
    • Unlike criminal prosecution, where the government brings action against a person accused of a crime, civil actions are brought by the impacted party
  • Administrative Law - Government charges agencies with purpose/function
    • Agencies abide by and enforce criminal and civil laws enacted by legislative branch
    • Published in the Code of Federal Regulations (CFR)

Laws

  • General Data Protection Regulation (GDPR) - European Union
  • Computer Fraud and Abuse Act (CFAA)
    • Based off of Comprehensive Crime Control Act of 1984 (CCCA)
    • Makes it a crime to (without authorization):
      • Access classified info or financial info in a federal system
      • Access a computer exclusively used by the federal government
      • Use a federal computer to perpetrate fraud
      • Cause malicious damage to a federal computer system in excess of $1000
      • Modify medical records on a computer when it may impair medical care
      • Traffic in computer passwords if it affects interstate commerce or involves a federal computer system
    • “Federal computers” was later extended to cover all “federal interest” computers
      • Computers used by US government
      • Computers used by financial institutions
  • CFAA Amendments
    • Also covers interstate commerce in addition to federal interest
  • Federal sentencing guidelines
    • Released in 1991
    • Prudent man rule - Senior executives must take personal responsibility for ensuring the due care that ordinary, prudent individuals would exercise in the same situation
    • Burdens of proof for negligence
      • Must have neglected a legally recognized obligation
      • Failure to comply with recognized standards
      • A relationship between the act of negligence and subsequent damages
  • National Information Infrastructure Protection Act of 1996
    • Broadens CFAA to systems used in international commerce
    • Extends to include national infrastructure
    • Treats intentional and reckless acts that cause damage as a felony
  • Federal Information Security Management Act - FISMA
    • Pertains to federal agencies
    • Includes activities of contractors
    • National Institute of Standards and Technology (NIST)
      • Responsible for developing FISMA implementation guidelines
      • Periodic assessments of risk
      • Policies and procedures
      • Plans for providing information security for networks, facilities, systems
      • Security awareness training
      • Testing and evaluation
      • Planning, implementing, evaluating, and documenting remedial actions
      • Detecting, reporting, and responding to security incidents
      • Ensure continuity of operations
  • Federal Cybersecurity Laws of 2014
    • Centralizes federal cybersecurity responsibility with the Department of Homeland Security
    • Two exceptions
      • Defense related cyber security stays with Sec of Defense
      • Intelligence related issues stay with Director of National Intelligence
    • NIST charged with responsibility for coordinating nationwide work on voluntary standards
    • NIST SP 800-53 - Required for use in federal computing systems
    • NIST SP 800-171 - Often a contractual obligation for federal contractors

Intellectual Property (IP)

  • Intangible assets are collectively referred to as intellectual property
  • Copyright and the Digital Millennium Copyright Act
    • Copyright law - Guarantees the creators of original works of authorship are protected from unauthorized duplication of their work
    • Works covered:
      • Literary
      • Musical
      • Dramatic
      • Pantomimes and choreographic
      • Pictorial, graphical, and sculptural
      • Motion pictures and other audiovisual
      • Sound recordings
      • Architectural
    • Computer software is covered under the scope of literary work - the actual source code
    • Registering a copyright is not needed for copyright enforcement
      • Creators of work have an automatic copyright
    • Work for hire - when an employer pays for work in the normal course of a work day, the employer owns the copyright
    • Copyrights last until 70 years after the death of the last surviving author
    • Anonymous works are protected for 95 years from first publication or 120 years from date of creation - whichever is shorter
    • DMCA prohibits attempts to circumvent copyright protection
      • Also limits service provider liability
      • Not accountable for “transitory activities”
        • Transmission must be initiated by someone who is not a provider
      • Exempts activities of service providers related to system caching, but providers must take prompt action to remove copyright materials upon notification of infringement
      • One may make backup copies
  • Trademarks
    • Words, slogans, logos, etc. used to identify a company and its products
    • Meant to avoid confusion in the marketplace
    • Automatically protected and can use the ™ symbol for public activities
    • For official recognition, apply to US Patent and Trademark Office
      • Registration certificate allows you to use the ® symbol
    • Must not be confusingly similar to another trademark
    • Should not be descriptive of the goods and services you’ll offer
      • “Thomas’ Software Company” is not a good trademark and may be rejected because it describes services
    • Trademarks are granted for 10 years and can be renewed for unlimited 10 year periods
  • Patents
    • Protect IP rights of inventors for 20 years
    • Inventions must…
      • Be new
      • Be useful
      • Not be obvious - ex: using a drinking cup to collect rainwater is obvious, but a specially designed cup that limits evaporation might not be
    • Patent trolls register patents, manipulate the system and rules to sue or otherwise make monetary gain from patents, often without creating the invention actually patented
  • Trade Secrets
    • IP that is critical to a business
    • Significant damage would result if it were disclosed to competitors or the public
    • Filing a copyright or patent requires you to publicly disclose the details, so they are no longer secret
      • Copyrights and patents are also time-limited
    • Trade secrets are things you want to keep to yourself
    • Enforced by nondisclosure agreements (NDA)
    • Patent law does not provide adequate protection for software products, so it’s often treated as a trade secret
  • Licensing
    • Contractual license - written contract
    • Shrink-wrap license - a clause stating you acknowledge agreement simply by “breaking the seal on the shrink wrap”
    • Click-through license - increasingly common, required to click a button, adds active consent
    • Cloud services license - click-through agreements to the extreme

Import/Export

  • International Traffic in Arms Regulations (ITAR) - controls export of items designated as military and defense items
  • Export Administration Regulations (EAR) - covers items designed for commercial use but that may have military applications
  • Computer Export Controls
    • US firms can export computers almost anywhere without permission, with exceptions
    • Exceptions from Department of Commerce’s Bureau of Industry and Security
    • Exceptions based on nuclear threat
    • Exceptions that need permission are currently Cuba, Iran, North Korea, Sudan, and Syria

Privacy

  • US Privacy Law
    • No constitutional guarantee of privacy - many federal laws exist
    • Fourth Amendment - prohibits government agents from searching private property without a warrant and probable cause
      • Also includes protection against wiretapping and other privacy invasions
    • Privacy Act of 1974
      • Severely limits federal government ability to disclose private information to other people or agencies without written consent of affected individuals
      • Only applies to government agencies
    • Electronic Communication Privacy Act of 1986
      • Makes it a crime to invade the electronic privacy of an individual
  • Communications Assistance for Law Enforcement Act (CALEA) of 1994
    • Requires communications carriers to make wiretaps possible
  • Health Insurance Portability and Accountability Act of 1996 (HIPAA)
    • Governs health insurance and health maintenance organizations
    • Hospitals, physicians, insurance companies
    • Defines the rights of individuals
  • Health Insurance Technology for Economic and Clinical Health Act of 2009 (HITECH)
    • Changed the way the law treats business associates
    • New data breach notification requirements
    • Must notify Health and Human Services and the media when a breach impacts more than 500 individuals
  • Children’s Online Privacy Protection Act of 1998 (COPPA)
    • Websites must have a privacy notice
    • Parents need an opportunity to review any information collected from their children
    • Parents must give verifiable consent to collection of information of children under the age of 13 before it’s collected
  • Gramm-Leach-Bliley Act of 1999 (GLBA)
    • Banks, insurance companies, credit providers
    • Information the above can share with each other
  • USA PATRIOT Act of 2001
    • The way government can obtain wiretapping authorization
    • ISPs may provide government with a large range of information
  • Family Educational Rights and Privacy Act (FERPA)
    • Educational institutions that accept any form of funding from federal government
  • Privacy in the workplace
    • There is a reasonable expectation of privacy when you mail a letter, etc.
    • Employees do not have a reasonable expectation of privacy while using employer-owned communications equipment in the workplace
    • Consider:
      • Clauses in employment contracts
      • Acceptable use and privacy policies
      • Logon banners
      • Warning labels
  • European Union General Data Protection Regulation (GDPR)
    • Took effect May 25, 2018
    • Widened scope of regulation
    • Applies to all organizations that collect data from EU residents or process it on behalf of someone else
    • Applies to organizations that are not based in the EU
    • Key provisions
      • Breach notifications must occur within 24 hours
      • Individuals will have access to their own data
      • Right to be forgotten - able to delete data

Compliance

  • Payment Card Industry Data Security Standard (PCI DSS)

CISSP Study Notes Chapter 3 - Business Continuity Planning

This chapter discusses how to identify, analyze, and prioritize business continuity requirements, including developing the scope and the plan. It also talks about business impact analysis, and how to participate in BC planning and exercises.

Chapter 3: Business Continuity Planning

My key takeaways and crucial points

Planning for Business Continuity (BC)

  • Business Continuity Planning - BCP. The goal is to implement policies, procedures and processes so that a disruptive event has as little impact on the business as possible.
  • BC activities are focused at a high level and center on business processes (strategic)
  • DR (disaster recovery) focuses on technical activities (tactical)
  • The overall BCP goal is to provide a quick, calm, and efficient response to emergencies, and to enhance the ability to recover from a disruption

Business Organization Analysis

  • Operational departments - responsible for core services
  • Critical support services - other groups responsible for systems upkeep
  • Corporate security teams - first responders
  • Senior executives - ongoing viability of the organization
  • Be sure to account for headquarters and also branch offices

BCP Team Selection

  • Representatives from each of the org’s departments
  • Team members from the functional areas
  • IT subject matter experts
  • Cybersecurity
  • Facility management
  • Attorneys
  • Human resources
  • Public relations
  • Senior management
    • Without an active senior management presence, your BCP plan will suffer or fail

Resource Requirements

  • The BCP process has four elements
    • Project scope and planning
    • Business impact assessment
    • Continuity planning
    • Approval and implementation
  • BCP is often expensive, requires a large amount of resources
    • Being out of operation is more expensive - identify the costs incurred by the business for each day that the business is down
  • Public firms have a fiduciary responsibility to have these plans; so do financial institutions and pharmaceutical manufacturers
  • Service level agreements likely obligate a BCP

Business Impact Assessment (BIA)

  • Identifies resources that are critical to an organization’s ongoing viability and threats posed to those resources
  • Quantitative decision making - Use of numbers
  • Qualitative decision making - Non-numerical factors

Identify Priorities

  • Gather input from all parts of the org
    • Everybody has different priorities
    • What IT/security/management thinks is mission critical may not be what the sales/finance people think is critical
  • Quantitative metrics
    • Asset value - AV. Monetary value of each asset
    • Maximum tolerable downtime - MTD. Max length of time a business function can be inoperable before it harms the business.
    • Maximum tolerable outage - MTO. Same metric as MTD.

Risk Identification

  • Natural risks vs man-made risks
  • The below are all considered man-made threats, though they might trip you up
    • Prolonged power outages
    • Building collapses
    • Transportation failures
    • Internet disruptions
    • Service provider outages

Likelihood Assessment

  • Annualized rate of occurrence - ARO. The number of times an event is likely to occur in a single year.
    • 0.5 = once every two years
    • 12 = once every month

Impact Assessment

  • Exposure factor - EF. The amount of damage a risk poses to the asset, expressed as a percentage of the asset’s value.
    • If a fire reduces the value of a business by 70%, the EF is 70%.
  • Single loss expectancy - SLE. Monetary loss that is expected each time the risk materializes.
    • SLE = AV x EF
  • Annualized loss expectancy - ALE. Monetary loss that the business expects to occur as a result of a risk harming the asset over the course of a year.
    • ALE = SLE x ARO
    • Allows you to budget annually for countermeasures for threats that may occur on a non-annual basis. A worked sketch follows this list.
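
To make the math concrete, here’s a minimal sketch in PowerShell (the language of choice around here). The asset value, EF, and ARO figures are hypothetical numbers I made up for illustration; they aren’t from the study guide:

```powershell
# Hypothetical figures for illustration only
$AV  = 200000   # Asset value: a $200,000 facility
$EF  = 0.70     # Exposure factor: a fire destroys 70% of the asset's value
$ARO = 0.1      # Annualized rate of occurrence: expected once every ten years

$SLE = $AV * $EF    # Single loss expectancy = AV x EF    -> 140000
$ALE = $SLE * $ARO  # Annualized loss expectancy = SLE x ARO -> 14000

"SLE: `$$SLE per incident; ALE: `$$ALE per year"
```

The ALE is the number you’d compare against the annual cost of a countermeasure when budgeting.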

Resource Prioritization

  • Create a list of all risks
  • Sort by descending order according to ALE

Continuity Planning - Strategy Development

  • Depends on the prioritized list of concerns according to ALE
    • Determines what risks are most important to the business
  • Refer to MTD estimates

People

  • PEOPLE ARE ALWAYS FIRST
    • Ensuring the safety of the people in your organization before, during, and after an emergency is always the first priority
    • People are your most valuable asset
    • Any time human safety is a possible answer to a question, you must remember that human safety is always the most important

Buildings and Facilities

  • Hardening provisions - Protects existing facilities against risks defined in strategy development phase
  • Alternate sites - If you can’t harden a facility, the BCP should identify an alternate site where business activities can resume as soon as possible

Infrastructure

  • Physical hardening systems - Protective measures
  • Alternative systems - Redundancy

Plan Approval and Implementation

  • Senior management approval and buy-in is essential to the success of the overall BCP effort

Plan Implementation

  • BCP team should supervise the conduct of an appropriate BCP maintenance program and ensure that the plan remains responsive to evolving risk

Training and Education

  • Personnel who are involved in the plan need training
    • On the overall plan
    • On their specific responsibilities
  • Everyone in the org
    • Plan overview
  • People with direct BCP responsibilities
    • Trained and evaluated on their tasks
    • At least one backup person needs training for each task

BCP Documentation

  • Written continuity document
  • Historical record of the BCP process for future personnel to understand the current process
  • Documents reviewed by people outside the BCP team for a “sanity check”

Continuity Planning Goals

  • The plan should describe the goals of the continuity planning
  • Ensure continuous operation of the business in the face of an emergency
  • Meeting SLAs and KPIs
  • Continuity plan elements
    • Statement of Importance
      • Reflects how critical the BCP is
    • Statement of Priorities
      • Identify priorities phase of business impact assessment
      • List functions that are critical to continued business operations in a prioritized order
    • Statement of Organizational Responsibility
      • Basically echoes the sentiment that “business continuity is everybody’s responsibility”
    • Statement of Urgency and Timing
      • Outlines the implementation timetable
    • Risk Assessment
      • Recaps the decision making process from BIA
      • For quantitative analysis, the AV, EF, ARO, SLE, and ALE figures should be included
      • For qualitative analysis, the thought process behind the risk analysis should be included
    • Risk Acceptance/Mitigation
      • Risks that are deemed acceptable - list the reasons
      • Risks that are deemed unacceptable - list provisions and processes in place to reduce the risk
      • Resist the statement “we accept this risk” and force the documentation of risk acceptance decisions
        • Force accountability for risk planning/acceptance
        • Protect yourself from auditor scrutiny
    • Vital Records Program
      • States where critical records will be stored
      • How to make backups and store them
    • Emergency Response Guidelines
      • Organization and individual responsibilities for emergency response
      • Immediate response procedures for security, safety, fire suppression, notification of emergency response agencies
      • A list of individuals who should be notified of incident
      • Secondary response procedures for first responders to follow while waiting for BCP team to assemble
      • Should be easily accessible

Maintenance

  • The BCP team should not be disbanded; it should continue to meet periodically
  • Practice good version control
  • Include BCP responsibilities in job descriptions, include in performance review process

CISSP Study Notes Chapter 2 - Personnel Security and Risk Management Concepts

This chapter is about working with risk. Human weaknesses are discussed at the beginning and end of the chapter in the form of job descriptions, and then about training. This chapter also discusses how to think about risk, define risks correctly, and how to think about countermeasures and other responses to risk.

Chapter 2: Personnel Security and Risk Management Concepts

My key takeaways and crucial points

Personnel Security Policies and Procedures

  • Humans are the weakest element in any security solution.
  • Making a job description involves setting a classification for the job, screening candidates, hiring and training.
  • Job descriptions are important to a security solution.
    • What kind of security access is needed to perform job responsibilities?
  • Separation of duties - Critical, significant, and sensitive work tasks are divided among several individuals, preventing one person from having any ability to undermine or subvert security mechanisms.
  • Principle of least privilege - Giving people the least amount of access and privileges that still allows them to perform their duties.
    • Protects against collusion - when two or more people work together to undermine security for the purposes of fraud, theft, or espionage.
  • Job responsibilities - Specific work tasks.
  • Job rotation - Rotating employees among multiple job positions. Provides knowledge redundancy, and reduces risk of fraud, theft, misuse, etc.
    • If misuse occurs, someone else performing the job will detect it. Additionally, misuse becomes more likely as people become increasingly familiar with their work tasks and how their privileges may be abused.
  • Cross-training - Workers are prepared to perform other job positions, but they are not rotated.
  • Job descriptions are not just for the hiring process, but need to be maintained throughout the life of the organization.
  • Onboarding - The process of adding new employees.
  • Offboarding - The reverse of onboarding.
  • Terminations should take place with at least one witness. People who are terminating should be escorted off the premises. Organization-specific identification and access must all be revoked and collected.
  • Exit interview - A meeting that takes place with an offboarding employee.
    • Make sure all organization equipment and supplies are returned
    • Remove or disable user accounts
    • Human resources remuneration gets handled (pay unused vacation time, etc.)
    • Arrange for security department to accompany employee while they gather personal belongings
    • Inform security personnel to ensure the ex-employee doesn’t attempt to reenter the building without an escort
    • Remind the employee of liabilities and restrictions based on employment agreement, nondisclosure agreement, etc.

Vendor, Consultant, and Contractor Agreements and Controls

  • Vendor controls are used to define levels of performance and expectations for organizations and people external to the primary organization.
  • Service level agreement - SLA. The document that often defines these expectations. For instance:
    • System uptime
    • Maximum consecutive downtime
    • Peak load
    • Responsibility for diagnostics
    • Failover time
  • SLAs commonly contain financial and other remedies for violations. Should include a focus on protecting and improving security.

Privacy Policy Requirements

  • Some definitions for privacy
    • Active prevention of unauthorized access to information
    • Freedom from unauthorized access
    • Freedom from being observed
  • Personally identifying information - PII. Any data that can be easily and/or obviously traced back to the person of origin or concern.
    • Examples: Phone number, email address, social security number.
    • NOT PII: MAC address, operating system type, high school mascot.
    • Student ID numbers are not PII. The same number can exist at different institutions.
  • Privacy legislation:
    • HIPAA - Health Insurance Portability and Accountability Act
    • SOX - Sarbanes-Oxley Act of 2002
    • FERPA - Family Educational Rights and Privacy Act
    • Gramm-Leach-Bliley Act
    • GDPR - General Data Protection Regulation
    • PCI DSS - Payment Card Industry Data Security Standard

Security Governance

  • Security governance - Supporting, defining, directing security efforts.
    • Third-party governance may be mandated by law, regulation, standards, etc.
  • Documentation review - Performed before any on-site inspection takes place.
  • ATO - Authorization to Operate - related to military and government contracts.

Understand and Apply Risk Management Concepts

  • Risk - The possibility that something could happen to damage, destroy, or disclose data or other resources.
  • Risk management - Process of identifying factors of risk and evaluating them in light of data value and countermeasure cost, and implementing cost-effective solutions for mitigating or reducing risk.
    • Supports the mission of the organization.
    • Reduce risk to an acceptable level.
  • Asset - Anything that should be protected. Something important enough to protect.
  • Asset valuation - A dollar value assigned to an asset.
  • Threats - Any potential occurrence that may cause an unwanted outcome for a specific asset.
  • Vulnerability - A weakness in an asset or the absence of a safeguard.
  • Exposure - Being susceptible to asset loss due to a threat.
  • Risk - The possibility that a threat will exploit a vulnerability and cause harm to an asset.
    • Risk = Threat * Vulnerability
  • Safeguards - A security control, countermeasure, or anything else that removes or reduces a vulnerability or protects against one or more threats.
  • Attack - The exploitation of a vulnerability by a threat agent.
  • Breach - When a security mechanism is bypassed or thwarted.

Risk Assessment/Analysis

  • Primarily an exercise for upper management.
  • No way to eliminate 100% of risks. Only reduce them to an acceptable level.
  • Quantitative risk analysis assigns real dollar figures to the loss of an asset.
    • Ex: You’d lose $15,000 if a car was stolen from your fleet.
    • Steps:
      1. Assign an asset value (AV) - inventory assets
      2. Calculate the exposure factor (EF - the percentage of loss experienced if a risk is realized) and single loss expectancy (SLE - the cost associated with a single realized risk against an asset)
      3. Probability that threat will occur within 1 year (ARO - annualized rate of occurrence. Once every two years is 0.5. Every month is 12.) - threat analysis
      4. Find the annualized loss expectancy (ALE - possible yearly cost of all instances of a realized threat), and calculate changes to ARO and ALE based on applied countermeasures. ALE = SLE * ARO. How much it would cost if this happened, times how likely that thing is to happen within one year, equals your ALE.
      5. Perform a cost/benefit analysis for each countermeasure. ALE before the safeguard, minus the ALE after implementing the safeguard, minus the annual cost of the safeguard, equals the value of that safeguard. If the value is negative, it’s not worth it. See the sketch after this list.
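
Here’s a minimal sketch of that step 5 calculation in PowerShell, again with hypothetical figures of my own choosing:

```powershell
# Hypothetical figures for illustration only
$ALEBefore  = 14000   # ALE with no safeguard in place
$ALEAfter   = 2000    # ALE after the safeguard is applied
$AnnualCost = 5000    # Yearly cost of the safeguard itself

# Value of the safeguard = ALE before - ALE after - annual cost of safeguard
$SafeguardValue = $ALEBefore - $ALEAfter - $AnnualCost

if ($SafeguardValue -gt 0) {
    "Worth it: the safeguard is worth `$$SafeguardValue per year"
} else {
    "Not worth it: the safeguard costs more than the loss it prevents"
}
```

With these numbers the safeguard is worth $7,000 per year, so it passes the cost/benefit test.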

Fun Fact! I did a session on how to get everything you’ve ever wanted from your stakeholders called “How to Take Risks Without Getting Clawed in the Face” that covers using a quantitative risk analysis approach to win every “I want something” conversation you’ll ever have.

  • Qualitative risk analysis assigns subjective and intangible values to the loss of an asset.
    • Ex: You’d lose the trust of your customers if your customer contact information was exposed.
    • Scenario-based. Scenarios are a written description of a single major threat.
    • Delphi technique - Anonymous feedback-and-response process used to enable a group to reach an anonymous consensus. Elicits honest and uninfluenced responses.
    • May also perform brainstorming, focus groups, surveys, etc.

Risk Responses

  • Risk mitigation - Implementation of safeguards to eliminate or block threats.
  • Risk assignment - Transferring risk so that the cost of a loss falls onto another entity or organization. Such as purchasing insurance or outsourcing.
  • Risk acceptance - Cost/benefit analysis shows that countermeasure costs outweigh the possible cost of loss from a risk, so you may just decide that you’ll do nothing about the risk other than acknowledge it and periodically revisit your decision.
  • Risk deterrence - Implementing deterrents like auditing, security cameras, warning banners.
  • Risk avoidance - Selecting an alternate option that has less risk.
  • Risk rejection - Unacceptable but possible, ignore the risk, deny it exists, hope it is never realized.
  • Residual risk - Comprises threats to specific assets against which one chooses not to implement a safeguard. Risk that is accepted.
  • Total risk - Risk faced if no safeguards were implemented.
    • Threats x Vulnerabilities x Asset value = Total risk

Countermeasure Selection and Implementation

  • Cost of countermeasure should be…
    • less than cost of asset
    • less than benefit of countermeasure
  • Result should make cost of an attack greater than derived benefits
  • Countermeasure should provide a solution to a REAL problem. Not just sound cool.
  • Benefit of countermeasure should not depend on its secrecy. That’s security through obscurity.
  • Countermeasures should…
    • Be testable and verifiable
    • Provide consistent and uniform protection
    • Have few or no dependencies
    • Require minimal human intervention after deployment
    • Be tamperproof
    • Have overrides for privileged operators only
    • Provide fail-safe and/or fail-secure options

Types of Controls

  • Technical - hardware or software
  • Administrative - policies and procedures
  • Physical - items you can physically touch
  • Security controls:
    • Deterrent - Discourage violation of security policies
    • Preventative - Thwart or stop unwanted activity
    • Detective - Discover or detect unwanted activity
    • Compensating - Provide options to other existing controls
    • Corrective - Modifies the environment to return systems to normal after unwanted activity
    • Recovery - Extension of corrective, but more advanced.
    • Directive - Direct, confine, or control actions of subjects to enforce or encourage compliance.
  • Security control assessment - SCA. Formal evaluation of security infrastructure’s individual mechanisms against a baseline or expectation. See NIST 800-53A.

Continuous Improvement

  • Risk analysis is performed to provide management with details needed to decide how to respond to risk.
  • Risk analysis/assessment is always “point in time” and must be periodically revisited.

Risk Frameworks

  • See NIST 800-37 for Risk Management Framework (RMF) which has these steps:
    • Categorize information
    • Select initial set of baseline controls
    • Implement controls
    • Assess the controls
    • Authorize information system operation based on determination of the risk
    • Monitor controls on an ongoing basis

Establish and Maintain a Security Awareness, Education, and Training Program

  • Awareness - prerequisite to security training. Make security a recognized entity for users.
  • Training - teaching employees to perform their work tasks and to comply with security policy.
  • Education - more detailed training where users learn much more than they actually need to know to perform their work tasks.

Manage the Security Function

  • Measurable security means that there is a clear benefit provided, and one or more metrics are recorded and analyzed.

CISSP Study Notes Chapter 1 - Security Governance Through Principles and Policies

Last summer I spent about a month studying for and getting my Certified Information Systems Security Professional (CISSP) certification from ISC2. I went about studying for the test a few ways:

  • I used the PocketPrep app - check your phone’s app store. I paid for it ($40 at the time) and did all of the questions. I found it was a great way to study in smaller durations of free time.
  • I attended a study bootcamp - my employer had a bunch of us going through the process of studying for the CISSP exam at the same time, and hired a professional to put us through a week long bootcamp. I found it super helpful to get “insider” tips on how the exam is constructed, and to hear tales of the test from someone who has written and passed it many times.
  • I did a bunch of practice tests - these came with my book.

And finally…

  • I got the ISC2 CISSP official study guide - I read it cover to cover, and highlighted and annotated the entire thing.

I took to Twitter (@MrThomasRayner) to find out if there was interest in a series about my takeaways and crucial points from each chapter in the ISC2 CISSP official study guide, and found that there was.

Twitter poll showing interest in this series

Chapter 1: Security Governance Through Principles and Policies

What this chapter is about

This chapter introduces security governance, management concepts, and principles which are inherent elements in security policy and in solution deployment. It covers a swath of foundational information that serves many of the other chapters in the book, like the CIA Triad and data classification.

Later, Chapter 1 discusses threat modeling, after it covers security control frameworks.

My key takeaways and crucial points

The CIA Triad

  • The most important security principle is the CIA triad. It stands for “Confidentiality”, “Integrity”, and “Availability”.
  • Confidentiality
    • The secrecy of data, objects or resources.
    • Encryption is a popular control for managing confidentiality.
    • An object is a “passive element” - files, computers, network connections, applications.
    • A subject is an “active element” - users, programs, computers (yes computers can be both active and passive depending on context).
  • Confidentiality terms:
    • Sensitivity - The attribute of information that could cause harm or damage if it was disclosed.
    • Discretion - Where one can influence or control data.
    • Criticality - The level to which data is “mission critical”.
    • Concealment - The act of hiding data to prevent disclosure. Sometimes this means “security through obscurity”.
    • Privacy - The confidentiality of data that is personally identifiable or that might cause harm, embarrassment, or disgrace to someone if revealed.
    • Seclusion - Storing something in an out-of-the-way location.
    • Isolation - Keeping something separate from others.
  • Integrity
    • The veracity/accuracy of data.
    • Assurance that data can be intentionally modified only by authorized subjects.
    • Cryptographic signing is a popular control for managing integrity.
    • Integrity means:
      • Preventing unauthorized subjects from making modifications.
      • Preventing authorized subjects from making unauthorized modifications, even mistakes.
      • Maintaining consistency of data so that they are true and correct.
    • Integrity depends on confidentiality.
  • Integrity terms:
    • Accuracy - Being correct and precise.
    • Truthfulness - Being a true reflection of reality.
    • Authenticity - Being authentic or genuine.
    • Validity - Being factually or logically sound.
    • Nonrepudiation - Not being able to deny having performed an activity, being able to verify the origin of something.
    • Accountability - Being responsible for actions and results.
    • Responsibility - Being in charge or having control over something.
    • Completeness - Having all needed parts.
    • Comprehensiveness - Being complete in scope.
  • Availability
    • The timely and uninterrupted access to objects.
    • Denial of service attacks are attacks on availability.
  • Availability terms:
    • Usability - The state of being easy to use or learn or being able to be understood and controlled.
    • Accessibility - Assurance that the widest range of subjects can interact with a resource regardless of their capabilities.
    • Timeliness - Being prompt, on time, or within a reasonable timeframe.

Other Security Concepts

  • AAA services provide “Authentication”, “Authorization”, and “Accounting” (aka “Auditing”).
  • Identification - Claiming to be an identity.
  • Authentication - Proving that you are that identity you claim.
  • Authorization - Defining permissions of a resource and object access for a specific identity.
  • Auditing - Recording a log of events and activities. Can be used to detect unauthorized activity or abuse.
  • Accounting - Reviewing log files to check for compliance. Linking a human to the activities of an electronic identity.
  • “Monitoring” is a type of watching or oversight, while “auditing” is a recording of information to a record or file.
  • Layering - aka “Defense in Depth”. The use of multiple controls in a series.
    • Performing security restrictions in series means they are performed one after another in a linear way.
  • Abstraction - Similar elements are put into groups or roles that are assigned security controls as a collective.
    • Classifying objects.
  • Data hiding - Preventing data from being discovered or accessed by a subject by positioning the data in a logical storage compartment that is not accessible or seen by the subject.
    • “Security through obscurity” is different. Data hiding is intentionally positioning data out of view, while security through obscurity is not informing a subject about an object being present.
  • Encryption - Hiding the meaning or intent of a communication from unintended recipients.

Evaluate and Apply Security Governance Principles

  • Security governance - Supporting, defining, and directing the security efforts of an organization.
    • NIST 800-53 or 800-100 standards apply here.
  • Governance is responsible for security policies
  • Business case - A documented argument or stated position. Helps to make a decision or take some action by demonstrating a business-specific need.
  • Top-down approach - One of the most effective ways to tackle security management planning.
    • Upper management is responsible for initiating and defining policies.
    • Middle management is responsible for fleshing out policies into standards, baselines, guidelines and procedures.
    • Operational management then must implement the configurations as prescribed.
    • End users must comply with security policies.
    • Opposed by the bottom-up approach where IT staff make security decisions without input from senior management.
  • Security management being the responsibility of upper management illustrates that it is an issue of business operations rather than IT administration.
  • Any and all security management plans fail without senior management approval and commitment to the security policies.
  • Types of plans:
    • Strategic plan - Executives are responsible. A long-term plan that is fairly stable. It should include a risk assessment.
    • Tactical plan - Managers are responsible. Midterm plan developed to provide more details on accomplishing the strategic plan’s goals. Includes and schedules the tasks necessary to accomplish organizational goals.
    • Operational plan - Employees are responsible. Short-term, highly detailed plan based on strategic and tactical plans. Define how to accomplish the various goals of the organization. Includes resource allotments, budgets, staff assignments, schedules, and step-by-step procedures.
  • Security governance is a continuous process.
  • Change control/management - Ensure that any change does not lead to reduced or compromised security, and the possibility to roll back any change to a previously secured state.
    • Change management is crucial for maintaining security by managing change systemically.

Data Classification

  • Data classification - The primary means by which data is protected, based on its need for secrecy, sensitivity, or confidentiality.
    • This is the process of organizing items, objects, subjects, etc. into groups and categories with similarities.
    • Securing all data at the same level is inefficient. Securing everything at a low level means sensitive data is exposed. Securing everything at a high level is too expensive and restricts access to some data unnecessarily.
  • Declassification - Once an asset no longer warrants or needs the protection of its assigned level, it needs to be declassified.
  • Government/military classification
    • Top secret - Highest level. Unauthorized disclosure will have drastic effects and cause grave damage. It’s compartmentalized on a need-to-know basis. A user could have top-secret clearance and have no access to data until they need to know about it.
    • Secret - Unauthorized disclosure has significant effects and causes critical damage.
    • Confidential - Unauthorized disclosure has noticeable effects and causes serious damage.
    • Sensitive but Unclassified - Used for data that is for office use only. Unauthorized disclosure could violate the privacy rights of individuals.
    • Unclassified - Data that is neither sensitive nor classified.
    • Top secret, secret, and confidential are “classified” categories. Sensitive but Unclassified, and Unclassified are “not classified” categories.
  • Commercial business/private sector classification
    • Confidential - Highest level of classification. Proprietary data, disclosure has drastic effects on competitive edge of an organization.
    • Private - Private or personal nature, intended for internal use only.
    • Sensitive - More sensitive than public data. A negative impact would occur if disclosed.
    • Public - Lowest level of classification. Used for all data that does not fit a higher classification.
  • Ownership - The formal assignment of responsibility to an individual or group. Owners often have full capabilities and privileges over the objects they own.

Organizational Roles and Responsibilities

There is a lot to know about each of these roles. Here are the key points about each one.

  • Senior manager - Organization owner. Must sign off on all policy issues.
  • Security professional - A trained and experienced engineer who is responsible for following the directives of management.
  • Data owner - Responsible for classifying information so it may be placed and protected within the security solution.
  • Data custodian - Implements the prescribed solution. Performs all activities needed to provide protection.
  • User - A person who has access to a secured system. Responsible for understanding and upholding the security policy.
  • Auditor - Reviews and verifies that the security policy is properly implemented and adequate.

Security Control Frameworks

  • Security control frameworks are standards that help plan an overall security solution.
  • COBIT - Control Objectives for Information and Related Technology.
    • 5 key principles:
      1. Meeting stakeholder needs
      2. Covering the enterprise end-to-end
      3. Applying a single, integrated framework
      4. Enabling a holistic approach
      5. Separating governance from management
    • Crafted by the Information Systems Audit and Control Association (ISACA).
  • ISO/IEC 27002, ITIL are other standards to know about.

Due Care and Due Diligence

  • Due care - Means “correction”. Using reasonable care to protect interests of an organization.
  • Due diligence - Means “detection”. Practicing the activities that maintain the due care effort.
  • Showing both due care and due diligence is the only way to disprove negligence in an occurrence of loss.

Developing Documents

  • Security Policies
    • Defines the scope of security needed.
    • Discusses the assets that need security and the extent that security solutions should go to provide the protection.
    • Defines all relevant terms.
  • Acceptable Use Policy - Designed to assign security roles within the organization as well as ensure the responsibilities tied to those roles.
  • Standard - Defines compulsory requirements.
  • Baseline - Minimum level of security. Operationally focused.
  • Guideline - Offers recommendations.
  • Security Procedures
    • Step-by-step how-to documentation.
    • Not all users need to know all parts of every standard, baseline, guideline and procedure.
    • Avoid creating one giant monolithic document.

Threat Modeling

  • The process where potential threats are identified, categorized and analyzed.
  • Not a single event.
  • Proactive approach takes place during development. AKA a defensive approach.
  • Reactive approach takes place after a product has been created and deployed.
  • Identifying threats - methods:
    • The focus in threat modeling is on objects
    • Focused on assets - Uses asset valuation results and tries to identify threats to the valuable assets.
    • Focused on attackers - Tries to identify potential attackers and their goals.
    • Focused on software - Considers threats against in-house developed software.
  • STRIDE is a scheme developed by Microsoft for threat categorization.
    • Spoofing - Using a falsified identity.
    • Tampering - Unauthorized changes.
    • Repudiation - Plausible deniability.
    • Information disclosure - Distribution of information to external or unauthorized entities.
    • Denial of service (DoS) - Prevents authorized use of a resource.
    • Elevation of privilege - A limited user account is used to gain greater access.
  • Process for Attack Simulation and Threat Analysis (PASTA) is another methodology.
    1. Definition of the objectives (DO) for analysis of risks
    2. Definition of the technical scope (DTS)
    3. Application decomposition and analysis (ADA)
    4. Threat analysis (TA)
    5. Weakness and vulnerability analysis (WVA)
    6. Attack modeling and simulation (AMS)
    7. Risk analysis and management (RAM)
  • TRIKE is another methodology that focuses on a risk-based approach.
  • Damage potential, Reproducibility, Exploitability, Affected Users, and Discoverability (DREAD) is another methodology.
  • Visual, Agile, and Simple Threat (VAST) based on Agile project management and programming principles.
  • Reduction analysis - Decomposing the application, system or environment into smaller containers or compartments. The purpose is to gain a greater understanding of the product and its interactions with other elements.
    • Trust boundaries - Any location where the level of trust or security changes.
    • Data flow paths - The movement of data between locations.
    • Input points - Locations where external input is received.
    • Privileged operations - Any activity that needs greater than standard privileges.
    • Details about security stance and approach - The declaration of policy, foundation and assumptions.

Prioritization and Response

  • Define the means, target, and consequences of a threat.
  • Rank threats using a probability x damage potential calculation.
  • Rankings can be subjective and arbitrary sometimes, but should at least be consistent.
  • Use a “high/medium/low” scale for each element of the calculation for simplicity if you must.
  • High-priority items need to be addressed immediately.
  • DREAD provides a rating system designed to be flexible (a scoring sketch follows this list):
    • Damage potential - How severe is the damage likely to be?
    • Reproducibility - How complicated is it for attackers to reproduce the exploit?
    • Exploitability - How hard is it to perform the attack?
    • Affected users - How many users are going to be affected (as a percentage)?
    • Discoverability - How hard is it for an attacker to discover this weakness?
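
Here’s one way you might turn those five questions into a score, sketched in PowerShell. The 1-to-3 scale and the example threat are hypothetical choices of mine; DREAD itself doesn’t mandate a particular scale:

```powershell
# Hypothetical DREAD scoring on a 1 (low) to 3 (high) scale
$threat = [pscustomobject]@{
    Name            = 'SQL injection in the login form'  # example threat
    Damage          = 3
    Reproducibility = 3
    Exploitability  = 2
    AffectedUsers   = 3
    Discoverability = 2
}

# Average the five ratings to produce a single priority score
$score = ($threat.Damage + $threat.Reproducibility + $threat.Exploitability +
          $threat.AffectedUsers + $threat.Discoverability) / 5

'{0}: DREAD score {1:N1} out of 3' -f $threat.Name, $score
```

Rank your list of threats by this score, and address the highest-rated items first.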

Apply Risk-Based Management Concepts to the Supply Chain

  • Secure supply chain - Where all vendors and links are reliable, trustworthy, reputable organizations.
  • All links disclose their practices and security requirements to their business partners, but not necessarily to the public.

What's going on with the PowerShell Summit recordings?

Important Update

The below post was based off of conversations that occurred in the community, and what can only be considered at best to be incomplete information. It’s important to me that I’m part of a solution to the miscommunication problems that got us to this point.

James Petty, who is the CEO of the DevOps Collective just issued a post on this subject that clears up a lot of the misconceptions and miscommunications. It’s pretty short, so I’ll reproduce it here to make sure you definitely see it before looking at the rest of this post.

The decision was made to partner with Pluralsight to do our video recordings in 2020. Pluralsight has offered to save us $40K-$50K (per event) by having their team do the recordings of the session. There was never any condition on their part that the recordings be hosted exclusively behind their paywall, this decision was made by the DevOps Collective. In fact, other events that they work with have their session recordings available to the public for free. After better understanding everyone’s perspectives on the subject, we’ve indicated to Pluralsight that we would like to use a different model instead, and they’re very happy to oblige us. They’ve made it clear that they’re happy to work with an event on whatever terms that event needs for its community, and we appreciate their flexibility and willingness to support us.

Give us a few days to get everything worked out and will make an announcement soon on the 2020 session recordings.

I appreciate the tough spot that James and his crew were put into here. This unraveled really quickly, and there are no easy solutions to the problems the Summit organizers are working to solve.

The vast majority of the uproar in the PowerShell community over the last couple of days has been based off the incorrect understanding that “Pluralsight will own the recordings and will decide what happens to them - probably going to put them behind a pay wall”. As James’ post outlines, this isn’t the case. It also appears that nothing is nearly as written in stone as many folks in the community previously thought it was. The right move at this point seems to be to chill out and trust that the good people behind all of the organizations involved will do the right thing. I’m all for holding people and organizations accountable, but there’s obviously a lot more about this situation that needs to be settled and communicated with the public. For my part in perpetuating the misinformation surrounding this subject, I apologize. Hopefully readers understand my original post below was based off of the information I had at the time, and appreciate my effort to correct it here.

The entire original post is below, with only a couple of formatting changes. The content itself is unchanged, since I don’t think there’s any point erasing it. Besides, my whole blog is on GitHub Pages so it’s all on the internet forever and ever anyway.


Here I go with my second post of 2019. I’ve certainly wanted to be more prolific, but it’s hard to blog about the work I perform daily for reasons I won’t get into. I’m really hopeful that I’ll return to an “every other week” cadence, but… we’ll see. Anyway, there’s been some news that I’m feeling compelled to provide my thoughts on.

In case you missed it, the folks who put on the PowerShell + DevOps Global Summit have just announced some changes about the recordings for the 2020 event. Simply put, in previous years, the DevOps Collective (the folks who organize and put on the Summit) have paid to record the sessions presented at the event with funds from sponsors and ticket sales. Those recordings have been uploaded to their YouTube channel and are therefore obviously shared for free with the community forever. In 2020, however, Pluralsight has purchased the rights to the recordings of these sessions and appears intent on hosting them behind a pay wall. Only people who attended the Summit and people with a Pluralsight subscription will be able to watch the recordings.

Full disclosure: I have some kind of a relationship with both Pluralsight and the DevOps Collective. I have authored two courses on Azure Automation for Pluralsight which I was paid for and continue to receive royalties on. You should watch them if you want to learn about Azure Automation. I am really proud of them. Occasionally, I buy and sell tiny amounts of Pluralsight stock. I’ve also presented at the previous two PowerShell Summits, and am looking forward to presenting again in 2020. I’ve received a stipend from the Summit for the session I presented in 2018, but since joining Microsoft, such compensation has ceased.

These changes to how the recordings from Summit are being handled have sparked a lot of discussion, and now I shall try to assemble my thoughts in one post. One thing that should be made absolutely clear is that although people are very passionate about this matter, and the community reaction has been largely negative, neither I nor anybody I’ve spoken with about this hold any anger towards the people behind these decisions on either side. I am lucky to know many of them personally and have nothing but great things to say about them. That said, there’s a lot to unpack here.

The Good

The good news here is there’s more money involved and that enables a lot of good things.

Good for Pluralsight

The obvious win here for Pluralsight is that they get exclusive access to incredible content shared by some of the most brilliant minds in this field. The content shared by Summit speakers is regularly the best new technical video content shared every year. Events like Summit, PSConfEU, PSConfAsia, and a small handful of user groups, are the only places you can depend on seeing these kinds of in-depth, intensely technical sessions. Putting this content behind a pay wall is clearly a way to generate revenue for Pluralsight. If you sign up for a trial to watch Summit recordings, maybe you’ll stick around and watch some other courses, and convert to being a paying customer.

Good for DevOps Collective

Warren’s post linked above indicates that the DevOps Collective will save about $100,000 USD between their two 2020 events. That’s a ton of money for this non-profit. Warren does a great job of outlining the benefits for Summit in his post, so I won’t reiterate them here, but I completely trust that the DevOps Collective is doing their absolute best to use these funds to put on the best event possible.

Professional recording of conference sessions is expensive. Unless you’ve tried to organize recordings for your own conference, it definitely costs more than however much you think it costs. I don’t know exactly why companies are able to charge so much money to record conference sessions, but they do. This isn’t even just a Bellevue thing. It’s expensive to record conferences everywhere in the world.

Good for Speakers

This is the group I identify with. The best part of this decision for speakers is that there are going to be more of us in 2020. The compensation for speaking at Summit is a free ticket to Summit. The more free speaker tickets the DevOps Collective gives away, the more other attendees and sponsors have to pay to help cover those costs. More speakers (hopefully) means a more diverse group on stage, and therefore a broader exposure of thoughts and ideas for everyone.

More speakers also means that each speaker may present fewer sessions. Personally, I love the idea of presenting as many times as the organizers will let me, but that’s obviously self-indulgent, and also probably not great for the aforementioned diversity of ideas and thoughts. Speaking at Summit is a Big Deal™, especially for folks who are less established in the community, and if speaker stress can be reduced by limiting the number of multi-session presenters, then that’s a good thing which obviously costs money.

Having recorded examples of technical sessions is a big deal for speakers. It’s big for one’s personal brand, their status in the community as a trusted source of information, and is a key element in a speaker’s work to get more sessions accepted at more events. The YouTube uploads of the Summit session recordings usually get a few hundred to a few thousand views. The DevOps Collective doesn’t have a ton of resources to spend marketing them, and so it falls on the speakers to spread their session recordings around. It stands to reason that a for-profit company with a more obvious financial interest in making sure these recordings get watched would put more effort and resources into marketing. Although the recordings would be behind a pay wall, it’s possible (but not proven) that more people would actually see them.

Good for Community

By “community” I mean people who may or may not be attending Summit. If you’re attending Summit, you’ll get access to the recordings. This is huge because Summit runs multiple sessions at the same time and it’s often impossible to choose between different sessions that run against each other. I’ve personally had the experience of building my schedule, having a hard time choosing a session for a specific time slot, and then realizing I should probably attend the session that I was going to be presenting. Pluralsight now has a substantial financial interest in making sure session recordings are available to paying customers and Summit attendees, which assures attendees that the “do I go here or there” dilemma isn’t such a big deal, because the recordings will be available afterward.

If you’re not attending Summit, there’s no good here for you at all. Speaking of which, now for…

The Bad

No matter which way you look at this, putting the Summit session recordings behind a pay wall has some absolutely impossible to ignore drawbacks.

Bad for Pluralsight

I’ll be blunt, not out of anger, but out of clarity: Pluralsight looks greedy here.

But, why shouldn’t they be? Pluralsight is a publicly traded company with a fiduciary responsibility to its shareholders to make decisions that are profitable. A somewhat modest, transparent display of greed isn’t really something that I personally fault Pluralsight for. They saw a business opportunity and seized it, negotiating terms that were favorable for Pluralsight. That’s exactly what they’re supposed to do.

Pluralsight will own the rights to the recordings of Summit sessions, and has made it clear that Pluralsight subscribers and Summit attendees will be able to watch them. Right now, it appears that these are the only groups of people for whom Summit session recordings will be made available, although nobody has actually come out and said publicly that the general community at large won’t be able to watch them. Pluralsight has some content available to watch for free, and there’s an opportunity here for Pluralsight to add the Summit session recordings to that library. One compromise that I’m personally a fan of is for Pluralsight to host the Summit sessions for its members and Summit attendees only for a few months, and then make them public.

Bad for DevOps Collective

Being blunt again not out of anger, but out of clarity: The DevOps Collective looks like they sold out.

All of the wonderful things that the DevOps Collective plans to do with this money aside, they’re a community-oriented non-profit who just silently sold an enormously valuable community resource to a for-profit company. This is a bad look. As a non-profit whose existential purpose is to contribute, and help others to contribute to the PowerShell and DevOps community, this is objectively a huge step in another direction.

Everybody who’s dug into this issue totally understands that recording conference sessions is expensive and complicated. Everybody deeply appreciates the financial and human resources that the DevOps Collective puts into making these recordings available to the community. Everybody also wants the DevOps Collective to continue making that investment and make sure these sessions are donated to the community. This is a “stuck between a rock and a hard place” scenario for the DevOps Collective, but right now the general public opinion appears to be that they chose wrong.

Bad for Speakers

Many speakers feel blind-sided by this. It appears that details of this arrangement were still being finalized while the CFP was going on, and so the first speakers heard of the recordings being put behind a pay wall was when they found out their session proposal was accepted and they read the speaker agreement. Some speakers I’ve spoken to have already decided they are going to retract their proposal, and many more are considering the same. I’m not among those who are considering pulling out of Summit, but I can see where they’re coming from.

Speakers Are Paying To Create Content For A For-Profit Company

My biggest issue with this whole thing is that speakers are being taken advantage of. When you agree to speak at Summit, you agree to cover your own travel and expenses. You are compensated with a free ticket to Summit, and that’s it. Possibly there will be a modest stipend, but that’s not part of the agreement. This adds up to a scenario where a speaker (or their employer) is paying thousands of dollars in travel, lodging, meals, time away from the office, etc. in exchange for the opportunity to speak at Summit, engage with other attendees, and to enjoy all the other sessions.

In this new situation, however, now a speaker or their employer is paying thousands of dollars for all of that, but a significant product of that investment is now a source of revenue for a for-profit company instead of a donation made to the technical community. Speakers are effectively paying to create content for a for-profit company. I can’t express clearly or emphatically enough how much I dislike this.

Since I work at Microsoft, it costs me nothing to speak at Summit. I live in the area, my employer gives me the time to attend the event. I suppose the expense I take on is some of my free time goes towards making my sessions as good as I can. In the past I’ve taken vacation time, and paid all my own expenses in order to speak at Summit. I did that with the understanding that me speaking at Summit was good for me (exposure, personal brand), good for the conference (unique content), and good for the entire community (watch my content). Nobody profited off my expenditures.

Bad for Community

Obviously, the general public got to view these recordings for free and now they have to pay. Pretty cut and dry here. I feel like I’ve sold the quality of the technical content shared at Summit already, and won’t re-hash it here.

The Bottom Line

There is good and bad for everybody here. More money being involved means that the event itself benefits, and so do the people directly connected to the event, including the speakers and attendees. The victims of this situation are the members of the general community who aren’t attending Summit, and the speakers. That’s right, speakers both win and lose here.

It’s a complicated scenario with a lot of different perspectives to consider. I don’t envy anybody involved in these decisions. While I think the DevOps Collective made a mistake, there is a chance for Pluralsight to “do the right thing” and make the recordings public on their site (perhaps after a period of time), and “donate” them to the technical community.

If you have thoughts on this new agreement that Pluralsight and the DevOps Collective have entered into regarding the 2020 recordings of the Summit sessions, especially if you like the idea of making the recordings public after a period of time of paid access, tweet at @PSHSummit, @PSHOrg, @Pluralsight, and use the hashtags #PSHDevOps and #PSHSummit. Tag me too, @MrThomasRayner. Please also join us in the Conferences channel on the unified/bridged PowerShell community Discord and Slack.


A Weekend At A High Schoolers’ Hackathon

Whoa, it’s been a while since I got a post out. Between my slower posting schedule and the fact that I moved from WordPress to GitHub pages (and changed the domain), it’s a miracle I have any SEO points left at all! Anyway, that’s not really the point of this post. The point of this post is to talk about the cool event I attended recently in Columbus, Ohio. Spoiler alert: I was blown away.

If you like (or don’t like) the story time format blog posts, please let me know by tweeting me: @MrThomasRayner. A lot of what I work on during my day job is stuff I don’t feel comfortable blogging about, and since that used to be my main source of blog post ideas, the story format might help get me writing again.

About the event

Hack OHI/O is a program which is self-described as fostering a tech culture at Ohio State University and the communities nearby. They’ve been around since 2013 and run a bunch of events. One of the people in my management chain is an OSU grad and worked to coordinate some sponsorship money from Microsoft in support of this awesome program, and that’s how I ended up getting to attend the High School I/O event to provide some mentorship to the participating students and to judge the projects at the end of the day.

The High School I/O event I went to, as the name suggests, is for high school aged kids. A lot of the other mentors were actually OSU Comp Sci students who volunteered their time. There were also folks there from Cover My Meds, who hosted us, and a handful of other local organizations. I think I won the “traveled the furthest to be here” award, having flown from the Seattle area. So, let me tell you the story of my weekend.

Friday

I made my way to SeaTac on Friday morning to head out to Columbus. The only direct flight had a 9:45 AM departure, so no sleeping in for me! I actually ended up waking up at the same time I’d wake up any other day. I recorded all the demos for my Writing Compiled PowerShell Cmdlets session for the PowerShell & DevOps Global Summit on the 4 hour flight there. When I landed, I finally got to meet @StevieCoaster who generously picked me up from the airport. We inadvertently attended three different breweries and taphouses (one closed for an event, one super full, and one that got us in). We sampled their wares, had dinner, exchanged stickers and swag, and gossiped relentlessly about the PowerShell community. It was awesome to meet Stevie, and I can’t wait to see him and a bunch of other folks again at PshSummit in April.

Saturday

Saturday was the day of the hackathon, and it was a full day. With the time difference, it felt like a 3:30 AM wake up to start things off. That’s nothing a little caffeine and enthusiasm can’t fix, so I got my act together and headed over to the event. I was a little early and stashed myself away in a conference room while the organizers and hosts finished setting up. I’m not sure why I thought it might start on time 😄. I chatted with some of the hosts who were Cover My Meds employees, and some of the other organizers and mentors who were mostly OSU Comp Sci students.

Mentoring the mentors

A lot of the students have internships lined up this summer for companies in the Seattle area, and were both excited and nervous about being that far from their homes in Ohio. As someone who just moved from his hometown to the Seattle area last year, I had a lot of empathy for them, but luckily, I could attest to how much I love being in Washington, and gave them some encouragement about that aspect of their internships. A couple of the students commented that they were going to miss college, where they had a stable group of friends, in familiar surroundings. They were excited for the next steps in their journey, but some had a gut feeling that they might not stay in touch in the long term, which I think is totally normal. Having gone through college and worked for a few different employers, I can definitely confirm that the folks you see every day at school or work aren’t always the people you end up forming long term relationships with after one of you leaves for something new. There are some that do, though, and I think those are true life-long friends. In my opinion, you’re lucky to have more than a few of those.

Token pose with something with the event logo on it

So, how about the hackathon itself?

The high schoolers who came had a huge array of ideas. There were IoT, mobile, web, desktop, and lots of other projects going on. There were games, self-help websites, social networks, accessibility tools, and tons more. This event didn’t have a particular theme (like an upcoming Hack OHI/O event for college aged students that focuses on AI, and others), so the students were pretty much left to their own devices. I had prepared a few suggestions for projects for groups that didn’t know what to do, but nobody I talked to had any trouble thinking of something to work on. The diversity among attendees was just as great to see. Students came from all kinds of different socioeconomic and ethnic backgrounds, as well as different genders. The value of the unique perspectives that come from having a team of diverse individuals is something that I’ve known for a long time, and something that Microsoft (my employer) very strongly believes in. I didn’t know what to expect at this event, but was refreshed and excited to know that programs like Hack OHI/O are reaching such a wide group of people.

Hacking for good

The humanitarian focus of a majority of projects was something I didn’t expect. I figured I’d see a lot of games and mobile apps. While I did see a lot of games and mobile apps, they were largely gamifying different aspects of self-help. Here’s a brief (absolutely not complete) list of projects I talked to students about:

  • An IoT boat-drone that carries healthy plants around ponds and lakes to help clean up pollution
  • A mobile app that surveys biometric and environmental data, correlated with a survey about your emotional state, to help you identify what kinds of places you find least stressful
  • A site dedicated to helping people with PTSD avoid their triggers via crowd sourcing
  • A mobile app that encourages kids with braces to wear their elastic bands by having them take a selfie with their bands on, and if their bands are on, awarding them points that go towards gift cards
  • An IoT deadbolt that opens if it can identify the person by their knock
  • A mobile app that helps hackathon attendees find events and projects to work on, as well as people with complementary skill sets
  • A site that helps people get the right amount of sleep
  • A site that helps people with hearing problems properly adjust their volume and EQ settings
  • A tool to help avoid “too much screen time” related headaches

That’s just a small list of what I can remember off the top of my head. There were a bunch more groups who were all working on equally awesome projects. I was blown away to see high school aged people focused on using technology for good so early in their tech adventures. I’m an enormous believer in using tech to help people, and it was amazing to see the bright, creative minds of tomorrow focused on making that tomorrow better not just for themselves, but for each other.

These kids weren’t messing around

You might be wondering, like I was, how much a team of 2 - 4 high school students could deliver in 8 hours of hacking time. To be sure, nothing that came out of this hackathon is ready for VC funding, but that’s not really the point, eh? Between the group of girls who “learned Unity in a day”, the group of kids who “discovered how easy it is to get started with Firebase”, and the group who “was surprised how inexpensive it could be to get 4 Raspberry Pis on Amazon”, it immediately became clear that these youngsters, who’ve grown up immersed in technology, have no fear about diving into something new and quickly learning it. That kind of curiosity and thirst for learning is obviously going to be a huge advantage as these people grow up and enter the workforce. For real, watch out for Gen Z.

I saw groups building client/server apps in Java, writing Swift, all kinds of JavaScript - including TypeScript - C#, C++, C, Python, Rust, and more that I’m just not remembering right now. Sure, there was some maturity missing from the dev process (poorly handled secrets everywhere, not a lot of CI/CD), but expecting enterprise-level dev maturity from kids who haven’t finished high school yet is obviously ridiculous. That kind of thing comes later.

I asked some of the groups how similar their projects were to other things they’ve worked on outside of this hackathon and was floored at the different apps, sites, tools, and games I was immediately shown. Their passion is contagious, and it was a huge privilege to get to chat with them about life, technology, and Microsoft all day.

Chatting with some of the groups about their project

Eventually, we crowned some winners:

  • Best Designed Hack: A group of ladies who “learned Unity in a day” to make a game with a humanitarian focus
  • Best Unfinished Idea: A group of guys who made that IoT deadbolt that recognized knocks of different individuals
  • Most Original Hack: A group who made a site that helps people self-diagnose minor illnesses and remedy them at home
  • Greatest Social Impact: These guys made an app that models your stress/happiness level based off environmental data
  • Most Technically Difficult: The group who made a mobile app that connected hackathon attendees with hackathons and ideas for projects

The winners and runners up got their prizes (some gift cards and cool tiny Microsoft Azure branded bluetooth speakers), and everyone said their goodbyes and made their way home. I headed back to my hotel to crash after a huge 14 hour day of fun.

Sunday

After some much needed sleeping in, I got myself together, checked out of my hotel, and found my way to the airport. There’s not much to report about Sunday, other than that I’m writing and about to publish this post from the plane on my way home while the experience is fresh, and I can’t wait to be back in Redmond! It was fun visiting Columbus, but man is it always great to come home to the PNW 😁.

Closing thoughts

It was so cool to get to attend this event and connect with the students who were participating (and mentoring). It was refreshing and invigorating to witness their enthusiasm and passion, as well as their focus on contributing positively to their community. I’ll certainly be doing my best to bring that energy back to Redmond and my work at Microsoft.

Thank you to Hack OHI/O for having me, and to Microsoft for sponsoring the event and sending me. I hope my teammates who attend the next few events have an experience as rewarding as mine was.

Read More

Messing Around With PowerShell 6

Starting with PowerShell 6, the whole language is open source. You’ve probably heard about that already. But if you don’t think of yourself as a “developer”, then it’s possible that the most you’ve ever taken advantage of that fact is creating a GitHub issue or commenting on a PR. Today, follow along with me, and we’ll change that.

If you’re at all comfortable writing PowerShell, you’ll be able to pick up C# with relative ease. To be fair, dabbling with editing PowerShell itself is pretty far removed from a “Hello world” exercise, but maybe it’ll be fun enough to motivate you to learn more. The deeper you get into PowerShell, the more knowing some C# will help you.

So, let’s get at it. First, clone the repository with git clone https://github.com/powershell/powershell.git or fork it and clone your fork. Check out the contribution guide for information on how to prep your environment and build your own pwsh.exe from the source.
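For reference, the clone-and-build loop usually boils down to something like this. This is a sketch based on the build.psm1 helper module that ships in the repo at the time of this writing, so trust the contribution guide over me if they disagree.

git clone https://github.com/powershell/powershell.git
cd powershell

# The repo ships a build module with helpers for the whole process
Import-Module ./build.psm1
Start-PSBootstrap   # one-time setup: installs the dotnet CLI and other build dependencies
Start-PSBuild       # compiles the source into your very own pwsh binary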

Then, open the PowerShell.sln file in Visual Studio, or open up the project in VS Code. I tend to prefer “full blown” Visual Studio for writing C#, because it’s what I’m used to, but VS Code is perfectly fine, with a couple extensions added on (which are recommended as soon as you start looking at C# stuff).

So, now, what do you want to do? I like to start simple. When you first launch pwsh.exe there’s a banner message that is displayed, which at the time of this writing reads something like:

PowerShell 6.1.0
Copyright (c) Microsoft Corporation. All rights reserved.

https://aka.ms/pscore6-docs
Type 'help' to get help.

Let’s make that more interesting. Probably, there’s a string somewhere that we can just edit and make it say whatever we want.

If you expand powershell-win-core, you’ll see Program.cs, which is probably what gets run when you fire up pwsh.exe on Windows. At least, when I started poking around, that was my guess. It seems pretty simple. It returns an UnmanagedPSEntry.

Look at the definition of the UnmanagedPSEntry class (by right clicking on it and selecting “Go to definition”), and you can read the code for the Start() method that is called in Program.cs. Eventually, you’ll get to around line 70 in the file that defines UnmanagedPSEntry (at the time of this writing) where a variable named banner, and another one named formattedBanner are assigned a value. The banner value seems to come from another class called ManagedEntranceStrings so maybe let’s take a look at that. That name alone sort of sounds like the type of thing we want to mess with right now.

The ManagedEntranceStrings class, as the comment in the file suggests, returns the cached ResourceManager instance used by the class. It looks like it’s located at "Microsoft.PowerShell.ConsoleHost.resources.ManagedEntranceStrings". So… let’s go peek in there. It’s a resx file.

Aha! Looks like maybe we found it. There’s a ShellBannerNonWindowsPowerShell item that looks like something we can fudge around with. I’m just going to make it a little more casual.

Save everything, follow the above linked directions to build PowerShell, and launch the pwsh.exe that it created. You should see your new message. Mine looks like this.

Welcome to PowerShell 6.1.0. If you're stuck, type 'help', otherwise, check out the docs.

It’s the little things that add the most joy to life. In all honesty though, explore a bit and you’ll start to learn about how PowerShell really works, and next time you see something that doesn’t work like you think it should, you’ll have more power to do something about it.

Read More

Learn PowerShell With PSKoans

If you’ve found your way to this blog, you probably already have a reasonable understanding of basic PowerShell concepts (or maybe that’s a foolish assumption). But how about all your coworkers? And you’re probably not done learning yet, either. There are plenty of ways to learn PowerShell - books, online courses, stealing code from blogs - but in my opinion, the best way to learn PowerShell is by writing PowerShell.

Normally, “learning by doing” when it comes to PowerShell involves just writing scripts and modules to satisfy the requirements of your job or side project. This is great, but you’ll end up not exploring certain areas of the language, and sometimes it’s nice to be able to know how to do your job before you need to start doing it… so let me introduce you to PSKoans.

PSKoans is a PowerShell module written by Joel Sallow, with the purpose of helping people learn PowerShell. The readme.md on his GitHub page for PSKoans does a good job of explaining what’s going on and how to use the tool, so I’m not going to reproduce that here. Instead, I’m just going to share my initial thoughts and experience.
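If you want to follow along, getting set up is roughly this. A quick sketch - PSKoans is on the PowerShell Gallery, but defer to the readme for the current instructions.

Install-Module PSKoans -Scope CurrentUser
Import-Module PSKoans

# Measure-Karma runs the koan tests and reports how far along you are
Measure-Karma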

Like me, Joel is active in the PowerShell Slack/Discord/IRC channel where people often come to get help with - you guessed it - PowerShell. After seeing stuff about PSKoans discussed for a while, and as I realized that I needed to come up with a good way to help my team at work improve their PowerShell chops, I figured I should give it a try and see if PSKoans would be suitable for use within my team.

Between a few interruptions, it took me a couple of hours to work my way through the Foundations set of koans, which cover basics such as working with collections, looping, different operators, and conditionals. In my opinion, it’s a pretty darn complete set of foundations. The files you work through are, for the most part, really clear about what you need to be doing and what you should be learning. Anywhere that wasn’t clear, clarity was soon restored by looking at the output of Measure-Karma, which is an included function for tracking your progress. There was one particular spot where I stumbled upon a koan that wasn’t quite right, so I opened an issue for it, and Joel fixed it the same day. Now that’s service.

I’m going to go ahead and use PSKoans with my team as a practical learning tool. Some people prefer books or online courses, and that’s awesome. There’s plenty of those already. What is awesome to see is people like Joel giving back to the community by making learning systems like PSKoans that take another approach and offer a hands-on learning that doesn’t have to take place in your production environments.


Full disclosure: I’ve only made my way through the Foundations, since I don’t want to get too far ahead of my team (we might work through some in a workshop setting). Foundations make up more than half the points available at the time of this writing, so it’s possible that the more sophisticated subjects could use some more coverage. Joel’s committed to adding more koans, though, so it’s worth watching this active project.

Read More

The Six Commands You NEED To Know To Work With Git

So let me say first, there are WAY more than 6 git commands you should know if you’re working with a project that uses git. However, when you’re first getting started, there are 6 git commands that you can’t get away without knowing. Here they are.

This “guide” assumes that you’re working with GitHub or Azure DevOps Repos or something similar, and that you’re creating your repos through the web-based GUI they offer, or that the repos are already created. You’re on your own there.

This is not a replacement for learning git. This is just a cheat sheet for beginners.

First, you need to clone the repo. This takes a copy of what’s in source control and puts it on your local system.

git clone https://github.com/thomasrayner/git-demo.git

If you’ve already cloned the repository before, and just want to retrieve new changes, you can pull them.

git pull

Next, you should probably be doing your work in a separate branch in order to avoid stepping on the toes of your coworkers. You need to checkout a new branch (-b to create it). You can also use checkout to move between branches on your local system.

git checkout -b new-branch

# or if you wanted to move from a branch you created back to the master branch
git checkout master

Now, make your changes. The demo repository I’m using to validate these commands is empty, so I’m just going to add something to a readme file using PowerShell. This is not one of the commands you need to learn.

"Just a little somethin' somethin'" | Out-File ".\readme.md"

You can check your status now (and at any point). This will even tell you which git commands you probably want next.

git status

# returns something like this
On branch new-branch

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        readme.md

nothing added to commit but untracked files present (use "git add" to track)

Next, to stage our changes, we’ll use add like status suggested. Flags and parameters in git are case sensitive, and I’m using the -A flag to stage All my unstaged changes.

git add -A

Once my change is staged, I need to commit it. -m is the parameter for a commit message, otherwise you’ll be prompted for one. These messages are critical for tracking changes to files and projects, so make them good.

git commit -m "Added readme.md"

Once the change is committed, it’s time to push it back to GitHub (or Azure DevOps, or whatever else you’re using).

git push

But wait, if you haven’t pushed since creating your branch, you’ll get an error! Mine looked like this.

fatal: The current branch new-branch has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin new-branch

Git very helpfully shares the command that we need to use the first time we push a new branch. You don’t need to do this every time, but when you perform a git checkout -b <branch name> then the first time you push it back to your source control, you need to include the --set-upstream origin <branch name> bit. After your source control knows about the branch, you can just do the normal git push command.

Now, you can go to GitHub or Azure DevOps and make a pull request to get the changes from your branch pulled into master.

That’s it. At the very least, you need to know clone, pull, checkout, status, commit and push if you’re going to work with git. Again, this is no replacement for more thorough learning and practice. There are a couple great courses on Pluralsight (click the link for my courses at the top of the page and search for “git”) and other text-based resources available for you when you’re ready to learn more. Until then, try not to get into too much trouble!

Read More

New Blog, Same Content

This is really just an obligatory post to announce that I’ve moved my blogging habits from workingsysadmin.com to this URL, thomasrayner.ca. Why? Well, I’ll tell you why.

Previously, my blog was a WordPress install hosted in Azure. Before that, it was a WordPress install on some shared hosting I’ve had since before there was time. Now, it’s hosted on GitHub Pages (proof). I’m still working out my workflow, and there are lots of bugs left over from the conversion, so if you see broken images or links, please file an issue on the GitHub repo.

So why move? There’s really three reasons.

1. I don’t really like WordPress that much

WordPress was great when I was getting into blogging because it had nice editing tools and a vast array of plugins. In a previous life I was a web developer, and I liked that there was a robust theme ecosystem that meant I didn’t have to do any web development whatsoever.

The issue became that the plugin market is full of janky, poorly secured stuff, WordPress itself has an enormous attack surface, and honestly, it’s overkill for my needs. I just wanted a blog. Jekyll on GitHub Pages has a way smaller attack surface, and it’s more comfortable for me to work with given my current workflow for non-blogging activities. I’ve committed to not over-doing it when it comes to my theme, so it will likely stay pretty standard. If you know of a good dark theme, let me know.

2. I wanted to put it somewhere other than Azure

I love Azure. Let’s not get that confused. So why did I want to get my blog off Azure? Well, I was paying for it with a credit that I get for having an MSDN subscription and I want to use those credits for something else. I love GitHub too, so I’m personally filing this as a win/win.

All in, it’s been a bit of a hassle, but so far it’s been worth it. I need to get my new workflow sorted out a bit better. Scheduling posts to be published on future dates was a big reason I stayed on WordPress as long as I did, but I should be able to get that sorted out here. This particular post is not scheduled; it’s just going live when I’m done writing and reviewing it, and decide to push it.

3. It was time for a new URL

The “Working Sysadmin” branding was fine when I was a “sysadmin”. The theme of the blog was, and continues to be, posts about the things I’m figuring out and working on at work. The trouble is, my career and role have changed a bit, and I really don’t consider myself a “sysadmin” any more. In fact, I think the distinction between “dev” and “ops” is harmful, and that we’d be better off if we all thought of ourselves as technologists anyway. The new URL, thomasrayner.ca, captures things a little better. I don’t currently plan on having any guest posters, and I’m Canadian, so self-branding seems to make more sense. This way, the blog can freely be about whatever is on my plate, without worrying about the “brand” of the blog. Spoiler: it’s still going to be a LOT of PowerShell and automation related subjects.

So, now it’s time for me to keep fixing broken links, updating RSS feeds, and all that kind of stuff. I’ve kind of just decided to wave goodbye to all the SEO I had built up with the old domain, because that’s not really what I do this for. Plus, that can be rebuilt.

Thanks for reading!

Read More

Editing An Azure DevOps Build Definition From Within The Build

It’s been a little while since I’ve managed to get a blog post out! Not to worry, though, as I’ve been nice and busy. One of the things I’ve been working on lately is writing a VSTS- I mean Azure DevOps extension.

The extension I’m working on will, among other things, need to update the build definition of the build that it’s currently building. Why? Because I’m incrementing a version number that’s stored in a build variable, which is part of the build definition. Here’s how I’m doing it.

First, you need to make sure that you grant the build permissions to access the OAuth key. This is under the additional options section of the agent job configuration. Then it’s time for some code.

$personalAccessToken = $env:system_accesstoken
$headers = @{"Authorization" = "Bearer $personalAccessToken"}
$headers.Add("Content-Type", "application/json")
        
$getBuildUri = "$($env:SYSTEM_TEAMFOUNDATIONSERVERURI)$($env:SYSTEM_TEAMPROJECT)/_apis/build/builds/$($env:BUILD_BUILDID)?api-version=4.1"
$getBuildResponse = Invoke-RestMethod -Uri $getBuildUri -Headers $headers

$getVarsUri = "$($env:SYSTEM_TEAMFOUNDATIONSERVERURI)$($env:SYSTEM_TEAMPROJECT)/_apis/build/definitions/$($getBuildResponse.Definition.Id)?api-version=4.1"
$getVarsResponse = Invoke-RestMethod -Uri $getVarsUri -Headers $headers

First, I’m retrieving the access token and building the authorization and content-type elements of the headers that I’m going to use to interact with the Azure DevOps API. Then, I’m getting the details of the build that’s currently running. After that, I get the definition for that build.

Then, I need to edit the variable that I want to manipulate, and write it back to Azure DevOps.

$getVarsResponse.variables.$Variable.value = $newValue
    
$putData = $($getVarsResponse | ConvertTo-Json -Compress -Depth 10)
$null = Invoke-RestMethod -Uri $getVarsUri -Method PUT -Body $putData -Headers $headers

In this example, $Variable is the name of the build variable whose value I want to edit. Then I’ll convert the PowerShell object back to JSON, and put it back in Azure DevOps.

Once you do this, it takes a few seconds for what you did via the API to be reflected in the web portal you access through the browser. Sometimes you have to wait a minute or two to make sure your change actually took place.

A big thanks to Chris “Halbarad” Gardner for helping me negotiate some challenges as I worked on this.

Read More

PowerHour - PowerShell Lightning Demos

If you haven’t been to the PowerShell & DevOps Global Summit, let me tell you that the lightning demos are an ultra fun and informative part of the conference. It’s so cool to see what other people are doing with PowerShell that you’d never think of because it’s not what you’re used to working on. I love the fact that PowerShell is so many places, with so much flexibility, that it creates countless opportunities for interesting, meaningful projects.

PowerHour is like a virtual PowerShell user group that meets periodically to do lightning demos. What’s a lightning demo? It’s a 10 minute live demo where you show off something neat that you’re working on or are proud of. It’s super informal, and very free form.

I highly recommend checking them out, and participating if you’re comfortable with that. Check out the PSPowerHour GitHub for more information.

Read More

Find Me At Techmentor For A Free Sticker

Are you going to be at Techmentor Redmond next week? I will be! You can catch me at my workshop on Monday to learn some master PowerShell tricks, or at my session on Tuesday to learn to write code that doesn’t suck. I’ll also be hanging around the rest of the conference, dinner events, and other people’s sessions.

I’d love to meet you! Say hi and I’ll give you a sticker (while supplies last).

Read More

Working With Azure Automation From The PowerShell AzureRM CLI

Back in March, I had the opportunity to link up with Microsoft Cloud Advocate Damian Brady and record an episode of The DevOps Lab. We chatted a little bit about the MVP Summit and being an MVP (which I am no longer, since I’ve joined Microsoft as an employee), and then got down to business administering Azure Automation purely through the AzureRM PowerShell module.

Check out the recording, below!

https://www.youtube.com/watch?v=qbvss7VuezA

Read More

Finding Out When A PowerShell Cmdlet Was Introduced

In the PowerShell Slack (invite yourself at bit.ly/psslack), there was a very brief debate over when the Expand-Archive cmdlet was introduced to PowerShell. This is absolutely information that can be found online, but there are a few different ways to find it.

Some cmdlets have this information built into their help, and some share it in the online docs. Since the documentation for the core cmdlets is open source and on GitHub, however, you can go straight to the source and quickly answer this question for yourself.

If you go to https://github.com/powershell/powershell-docs, you’ll find all the documentation for the core PowerShell cmdlets. In the Reference folder, you’ll see documentation for all the currently supported versions of PowerShell (back to 3.0). The docs for older cmdlets are in there too, but in my experience, you’re typically checking whether a cmdlet was introduced in version 4 or 5.

Click on the Find File button in GitHub, and you’ll be presented with a search screen.

From there, type in the name of the cmdlet, and the search will start to populate. Let’s see when the Expand-Archive cmdlet was introduced.

You can see that this core cmdlet is in the docs for versions 5.0, 5.1 and 6. That means that we can assume this cmdlet was introduced in PowerShell version 5.
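If you have the module installed locally, you can also poke at it from the console. A quick sketch - Expand-Archive ships in the Microsoft.PowerShell.Archive module, and the module metadata tells you a bit about what it targets:

# Which module ships Expand-Archive?
(Get-Command Expand-Archive).ModuleName

# Inspect that module's metadata, including the PowerShell version it requires
Get-Module Microsoft.PowerShell.Archive -ListAvailable |
    Select-Object Name, Version, PowerShellVersion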

Read More

The PowerShell Conference Book

If you’re active on social media and follow things about PowerShell, you’ve probably already seen some information about the PowerShell Conference Book. It’s a community effort that was created to support the PowerShell.org OnRamp Scholarship Program.

I had the distinct honor of being asked to participate in this book, which has chapters from dozens of the brightest minds in the PowerShell community. My chapter is about creating custom PSScriptAnalyzer rules.

If you follow my work at all, you might realize that sounds familiar. Isn’t that the session that I did at PowerShell and DevOps Global Summit 2018? Well yes it is. The point of this book is to basically be a print-format conference. Obviously the networking and socializing part is a little bit lacking (it’s a book, give us a break) but the “attend different talks and learn about a crazy variety of awesome topics” part is more than well covered. Every chapter is written by a subject matter expert, independent of the other chapters - kind of like how talks at a conference are given.

If you’ve never been to the PowerShell and DevOps Global Summit or the European or Asian equivalent conferences, then you’ll get a bit of a glimpse of the world class content and knowledge that gets shared at these things. Either way, all of the proceeds from your purchase of this book go directly towards funding the OnRamp Scholarship Program, which is a full ride to the next PowerShell and DevOps Global Summit for people who are underrepresented in IT, and people who are just getting started in their careers and could use a boost. It’s going to make a real difference in some people’s lives, and is incredibly worthwhile.

Basically what I’m trying to say is this: buy the book. You’re virtually guaranteed to learn something that will have this book paying for itself real fast. Plus, you’re supporting an awesome cause.

Read More

I was re-awarded as a Microsoft MVP, but I'm leaving the program

On July 1, I was notified that I was re-awarded as a Microsoft Most Valuable Professional (MVP)! Being an MVP is an enormous privilege, and has been a huge benefit to me professionally. If you’re not familiar with the MVP Program, it’s basically an award given to independent technologists who share technical knowledge with the community. That might mean blogging, public speaking, creating videos, being active on social media, answering questions on technical forums, or lots of other things.

In addition to a cool glass trophy, being an MVP comes with a bunch of other perks like an MSDN subscription, an O365 license, Azure credits, and other assorted swag and gifts. The biggest benefit by far, though, is access to NDA-protected mailing lists, and the networking opportunities to connect with other MVPs and full time Microsoft employees.

This is my fourth MVP award, and since April 2015, I’ve had the distinct pleasure of getting to know the most incredible people, mentor others, be mentored, influence the products Microsoft makes, and share thousands of hours of effort in the form of books, blog posts, public speaking, and other ways of giving back to the community that’s helped me so much. Through being an MVP, I’ve met great people who have helped me in my career tremendously. I’m grateful to all of them.

On that note, as of July 9, 2018, I won't be eligible for the MVP program any more and therefore will have to give up my status as an MVP.

One of the conditions for being a Microsoft MVP is that you aren’t a Microsoft employee. This spring, I accepted a position at Microsoft as a Senior Security Service Engineer, and will be starting on Monday, July 9! I’ll be joining an immensely talented team doing fascinating work, applying my skills in the area of scripting and automation, and helping guide their growing DevOps habits.

I couldn’t possibly be more excited.

As a small note, I’ll be relocating to the Seattle area this summer, and getting my feet under me in this new position, so the weekly streak of blog posts I’ve been able to uphold for over a year is likely to be interrupted. I’ll still be posting, but perhaps not quite as frequently. Just because I’m not going to be an MVP any more doesn’t mean I’m not still committed to sharing information and helping the technical community any way I can.

Read More

Quick Tip - See All The Tab-Completion Options At Once In The PowerShell Console

If you’re used to working in VS Code or the PowerShell ISE, you’ve undoubtedly enjoyed IntelliSense, the feature that shows you all the tab completion options at once. That functionality is really handy, but what if you’re in the PowerShell console? The little overlaid windows don’t pop up there with your completion options. You can still tab through until you find what you want, but it’s not the same.

Don’t worry, there’s a PSReadline feature that will save you here.

Start typing something, like a cmdlet, and then instead of tab completing it, use Ctrl + Space to see the different options available to you. You can navigate through the different options using the arrow keys. Check out this gif of this feature in action.

Super handy. I love this feature for hunting through different parameters for a cmdlet.
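If you want Tab itself to behave this way, PSReadline can do that too. A minimal sketch - default key bindings vary between PSReadline versions, so check yours with Get-PSReadLineKeyHandler:

# Make Tab show the menu-style completion list instead of cycling through options
Set-PSReadLineKeyHandler -Key Tab -Function MenuComplete

# See which keys are already bound to MenuComplete
Get-PSReadLineKeyHandler | Where-Object Function -eq 'MenuComplete'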

Read More

Quick Tip - Split A PowerShell Collection Into Two Arrays

Did you know that you can use Where-Object to split a collection into two arrays? Like, if you had an array containing the numbers 1 to 10, you could split it into one array of even numbers and another array of odd numbers? It’s pretty cool. Thanks to Herb Meyerowitz for this tip!

As you can tell from Herb’s comment in this screenshot, it’s actually the relatively new .where() method that we’re using to split collections this way. The syntax is kind of atypical, so let’s break it down.

$a, $b = (1..5).where({$_ % 2}, 'split')
  • $a, $b = 
    • This part is creating two variables, named a and b that will be used to contain the output of our splitting activity.
  • (1..5)
    • This just creates an array of the numbers 1 through 5.
  • .where( )
    • This is a method that comes with PowerShell 5 (I'm pretty sure) which works mostly like the Where-Object cmdlet that you're used to. Most people just use it to filter collections on a filterscript (stay tuned) but it also takes other arguments.
  • {$_ % 2}
    • This is the filterscript, or basically the criteria we're using to split up our collection. It will effectively create a true and a false list like the condition in an if statement. The percent sign is the modulus operator and will determine if the number is divisible by two without a remainder or not.
  • 'split'
    • This is the mode. By default, the collection is filtered using the filterscript like when using Where-Object and only the objects matching the condition are returned. But that's not the only mode! We're going to use the split mode to break it into two collections. Maybe another blog post will come out on some of the other modes.
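Putting it all together with the 1-to-10 example from the top of the post, here’s what the split looks like:

$odd, $even = (1..10).where({$_ % 2}, 'split')

$odd    # 1 3 5 7 9  - everything the filterscript evaluated as true
$even   # 2 4 6 8 10 - everything else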

That’s it!

Read More

Sneaky PowerShell Trick - Run Completely Without A Window

Maybe you have a login script or something else written in PowerShell that you want to run without having any kind of window pop up - not even a blank one. There are a few ways to do this, but my current favorite is to wrap it in C#. Thanks to Mark Kraus for this tip!

Fire up Visual Studio and create a new C# console application. Right click and change the properties of the project so that the output type is Windows Application… and then, get this… don’t make a window.

From there, you’ll need to add a reference for PowerShellStandard.Library so you can do PowerShell-y business in your C# application. After that, you can make a method look like this.

using System.Management.Automation;  // comes from the PowerShellStandard.Library reference
using System.Threading;

static void Main(string[] args)
{
    var powershell = PowerShell.Create();
    powershell.AddScript(@"
Get-ChildItem -Path c:\temp | out-file c:\temp\shh.txt
");
    var handler = powershell.BeginInvoke();
    while (!handler.IsCompleted)
        Thread.Sleep(200);
    powershell.EndInvoke(handler);
    powershell.Dispose();
}

It’s not super complicated code. The two using directives at the top pull in the PowerShell and threading types we need (the first one comes from that PowerShellStandard.Library reference). Line 6 creates a new PowerShell object so we can, you know, run PowerShell code. On lines 7 - 9, we’re adding the script that’s going to be executed. You could retrieve the code from a file, but I’ve just stuck a here-string in there. My script is pretty simple: it’s just getting all the items in my c:\temp folder and exporting that output to a file named shh.txt.

After that, I’ve got to actually run my code. That happens on line 10, and then on lines 11 and 12, we make the thread wait until the code is done executing. Finally, on lines 13 and 14, I’m cleaning up after myself by ending the invocation and disposing of my PowerShell object.

You can build this, and then in your app’s bin\Debug folder you’ll find an .exe file which you can run. Double click it and you shouldn’t see any window pop up - no console, no XAML window, no nothing - but your PowerShell script will run. If you made yours look like mine, go check for the file it should have created.

Now when your login script runs, your users won’t be able to close it before it finishes!

Read More

A Year Of Weekly Blog Posts - Lessons Learned

With this post, I’ve got a new post up on this blog every Wednesday morning for a year. I’m pretty proud of that! There are certainly more prolific bloggers out there, especially in this space, but for me, this is quite the accomplishment. This is weekly consecutive blog post number 53.

In celebration of getting through a full year of weekly blog posts on topics of PowerShell, DevOps, automation and IT strategy, in this post I’ll share some of the lessons I’ve learned. This isn’t a big list of everything you need to know to blog, or even things that might work for you, but just things I’ve learned about blogging over the last year.

Not every blog post needs to be a textbook

A lot of people get caught up on what is “blog worthy” and dismiss post ideas because they aren’t long enough, or because the topic has been covered before. Long posts, multi-part series, and other enormous posts are totally awesome. They also tend to be pretty niche and time consuming. There’s nothing wrong with a shorter post - they help people too! There’s also nothing wrong with adding your own unique perspective on a topic that has already been covered by other bloggers, since your commentary can sometimes be just as valuable as the technical content.

Post ideas are everywhere

After a few months of weekly posts, I realized that my daily scope of work didn’t change enough for me to be constantly blogging about what I’m working on. Therefore, I had to look for some new sources of inspiration. I joined the PowerShell Slack channel, paid more attention to Twitter, and started remembering questions my coworkers had and used those as material for my blog. Not only do you get to help someone in the moment, but you get to help even more people through that blog post being up forever (or until you forget to renew your domain or web hosting).

Post ideas come in bursts

When I first committed to weekly blog posts, I had a ton of ideas written down on a OneNote page. I spent a couple weekend afternoons writing about 20 posts, and scheduled them to go public every Wednesday morning. That was awesome, and it meant that I didn’t have to think of any new posts for a few months. When I had about 5 posts left, inspiration struck again, and I wrote another big batch of posts. In fact, I’m writing this exact post about a month before it will go up because this is when I felt inspired to write it. Don’t worry, if I learn anything new in the next month about blogging, I’ll update the post.

Scheduling posts in advance, and having a hopper full of posts ready to go is mandatory if you’re going to commit to a weekly post schedule. Eventually you’ll want to take a week off, you won’t be able to think of a post, or you just won’t have time. In these situations, you’ll be glad that you have a bunch of posts queued up ready to go.

Not every post has to be "on brand"

Over the Christmas season, I was in Mexico, and felt the need to queue up a bunch of posts. The holidays are a time when most of the people who read my blog are away from work, so I felt like I had a little freedom regarding which topics I posted about. Since I didn’t have a lot of PowerShell or DevOps topics ready to go, I posted about what I had been doing in some of my free time, practicing some of my pentesting skills on HackTheBox.eu. They’re certainly not my most viewed posts, and I didn’t get a lot of interaction on them, but nobody complained either. Don’t be afraid to blog about whatever you’re passionate about in the moment. Just because you run a PowerShell/Exchange/AWS/Football/Checkers/PKI blog doesn’t mean you can’t branch out. Blog for yourself first and others second, and you’ll be much happier.

Read More

New in PowerShell 6 - Positive And Negative Parameter Validation

If you’ve written at least a couple of advanced PowerShell functions, you’re probably no stranger to parameter validation. These are the attributes you attach to parameters to make sure that they match a certain regular expression using [ValidatePattern()], or that when they are plugged into a certain script, that it evaluates to true using [ValidateScript({})]. You’ve probably also used [ValidateRange()] to make sure a number falls between a min and a max value that you specified.

In PowerShell 6, though, there’s something new and cool you can do with ValidateRange. You can specify in a convenient new syntax that the value must be positive or negative.

To do this, you start with a normal ValidateRange attribute, and instead of providing a range of numbers, you just use the word “Positive” or “Negative”, like this.

[ValidateRange('Positive')]$int = 10
[ValidateRange('Negative')]$int = -10

These will both work correctly because we’re assigning a value that works with the validation we’ve specified. Here are two that will throw errors.

[ValidateRange('Positive')]$int = -10
[ValidateRange('Negative')]$int = 10

Here’s what it looks like in the console.

Neat, right?
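Where this really shines is in a param block. Here’s a minimal sketch - the function and parameter names are made up for illustration:

function Set-RetryCount {
    param (
        # 'Positive' rejects zero and negative values when the parameter binds
        [ValidateRange('Positive')]
        [int]$Count
    )
    "Retry count set to $Count"
}

Set-RetryCount -Count 3    # works
Set-RetryCount -Count -1   # throws a parameter validation error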

Read More

Display All The Claims For A User Visiting Your .NET Core Azure Web App

Regular visitors of this blog are used to seeing PowerShell and DevOps content, and this is a little bit of a divergence since it’s written in C#, and it’s a .NET Core MVC Azure Web App, but if it found itself on my plate, maybe it will find itself on yours. I was tasked with writing an Azure Web App that users would visit and sign into using their Azure Active Directory (ie: “Work or School”) account, to test whether their Conditional Access and MFA were configured properly. Once logged in, a little information about the user is displayed.

Here’s how to pop all the claim information for an authenticated user into a Razor Page.

I decided to put the whole thing into an HTML table to make it a bit more readable. It’s kind of a challenge to differentiate between the claim name and the value if they aren’t aligned nicely. From there, make sure you’re using System.Security.Claims, and you can write yourself this foreach loop.

<table>
    @foreach (var claim in ((ClaimsIdentity)User.Identity).Claims)
    {
        <tr>
            <td>@claim.Type</td>
            <td>@claim.Value</td>
        </tr>
    }
</table>

It’s not a big mind blower. This is a .cshtml document, so we can write HTML and mix in some inline C#. Using the ClaimsIdentity class, we can write a foreach loop over each claim in the identity of the currently logged in user. This assumes that the user isn’t logged in more than once (ie: Facebook and Twitter and Azure AD).

Then I’m making a new row in my table for each claim, and separate cells for the claim type, which is the name of the claim, and the claim value.

Nice and concise!

Read More

Script Share - Disable Azure AD MFA Without Wiping User Options

How’s this for a niche topic? If you want to move to Azure AD P2 Conditional Access and have users who are on P1 MFA, then in order to move them over, you have to disable and re-enable MFA on their accounts - or at least that’s what one PFE told me. The problem is, when you do that, you lose their options by default - like whether they prefer to enter a code from the app, receive a text, etc. Wouldn’t it be nice if you could keep that stuff?

Well, you can!

Here’s a PowerShell function I wrote that performs this task. It assumes you’ve already done a Connect-MsolService and logged in successfully.

function Move-MfaSettings {
    <#
    .SYNOPSIS
        Converts a user from classic MFA to modern MFA and retains their settings.
    .DESCRIPTION
        Takes a MSOL user and manipulates their settings to engage modern MFA without overwriting their current preferences.
    .EXAMPLE
        PS> Move-MfaSettings -User $(Get-MsolUser -UserPrincipalName myguy@domain.tld) 
        If a user with the UPN myguy@domain.tld is found, the MFA settings will be updated.
    #>
    [cmdletbinding(SupportsShouldProcess)]
    param (
        [parameter(Mandatory, ValueFromPipeline)]
        #The MSOL user object whose MFA settings are being adjusted
        [Microsoft.Online.Administration.User]$User
    )
    if ($PSCmdlet.ShouldProcess("Converting user $($User.UserPrincipalName) from classic MFA to modern MFA")) {
        $strongAuthMethods = $User.StrongAuthenticationMethods | Select-Object MethodType, IsDefault
        $setStrongAuthMethods = @()
        foreach ($strongAuthMethod in $strongAuthMethods) {
            $add = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationMethod
            $add.IsDefault = $strongAuthMethod.IsDefault
            $add.MethodType = $strongAuthMethod.MethodType
            $setStrongAuthMethods += $add
        }
        $null = Set-MsolUser -UserPrincipalName $User.UserPrincipalName -StrongAuthenticationRequirements @() -StrongAuthenticationMethods $setStrongAuthMethods
    }
}

It could probably stand to have a better name, but I’ve called it Move-MfaSettings, and it takes one parameter: a MSOL user object. It supports the -WhatIf flag by implementing SupportsShouldProcess.

On line 18, I’m storing the Strong Authentication Methods of the user object that was passed to the function. All I need out of here is the MethodType and IsDefault properties. This is the option/preference information that we would normally lose by performing this task. We’re going to lose it again, but since we’ve saved it here, we can rebuild and add it back later.

Lines 20 through 25 go through each of the preferences we collected on line 18 and build a new StrongAuthenticationMethod object out of each one. Then we set the IsDefault and MethodType properties and store the new object in an array.

Finally, on line 26, I’ve used Set-MsolUser to disable the Strong Authentication Requirements, and set the Strong Authentication Methods to the array of those objects we created.

Now you can disable MFA for users while still keeping their settings, which is pretty handy when you’re transitioning to P2 Conditional Access.
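Usage ends up looking something like this, reusing the hypothetical UPN from the help example (and assuming you’ve already run Connect-MsolService):

# -WhatIf works because the function implements SupportsShouldProcess
Get-MsolUser -UserPrincipalName myguy@domain.tld | Move-MfaSettings -WhatIf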

Read More

A Crash Course In Building Your Own PSScriptAnalyzer Rules - My PowerShell & DevOps Global Summit Session Recording

I had the pleasure of presenting a session at the PowerShell and DevOps Global Summit in Bellevue in April 2018 and the session recordings went live last week. My session was titled A Crash Course in Building Your Own PSScriptAnalyzer Rules and it’s a pretty fast 45 minutes. I’ve been getting lots of wonderful feedback on it, so if this is something you might be into, please give the recording a watch! It’s easier than you might think.

https://youtu.be/_T8wLsbTWJY

Click here if the embedded video doesn’t work: https://youtu.be/_T8wLsbTWJY

Read More

Forcing A Non-Terminating Error To Be Displayed In PowerShell

In full disclosure, this post contains information that a user experience expert might frown at. I’m not really sure, since I’m not a user experience expert. I do know a lot about PowerShell, however, and that’s really what this post is about.

Say you have users of your scripts and modules who might have their $ErrorActionPreference set to SilentlyContinue or maybe you know for a fact that your code explicitly sets it that way. That’s probably another thing that will make the user experience pros mad but here you are anyway. Let’s just say that your stakeholders FORCED you to do it. What happens if you absolutely need to, have to, must display a non-terminating error, such as those you create with Write-Error? Here’s one option.

You could store the current Error Action Preference in another variable, set the EAP to “Stop” or “Continue”, write your error, and then set the EAP back - but there’s a simpler way!

Write-Error, like pretty much everything else, has an -ErrorAction parameter which determines what PowerShell will do if the cmdlet it’s attached to throws an error. Since Write-Error will, by definition, throw an error every time you run it, -ErrorAction becomes pretty important if you want to use it here.

Even if the Error Action Preference is set to SilentlyContinue, you can do this…

Write-Error "This is my error" -ErrorAction Continue

… and your error will be written to the screen anyway. Obviously you can use the -ErrorAction parameter on everything else that has one too, for the same effect. Error Action Preference is just there to determine how errors are handled if you don’t specify -ErrorAction.
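To see the whole effect in one place, here’s a minimal sketch you can paste into a console:

$ErrorActionPreference = 'SilentlyContinue'

Write-Error "You will not see this one"                  # suppressed by the preference
Write-Error "This is my error" -ErrorAction Continue     # displayed anyway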

Read More

Writing Your Own Custom VSCode Snippets

If you’ve seen any of the recent talks from Microsoft employees and MVPs about PowerShell, it’s hard to miss that Visual Studio Code (VS Code/VSCode) is the new hot place to be writing your PowerShell code. VSCode with the PowerShell extension is the current Microsoft-recommended coding environment, whereas it used to be PowerShell ISE. ISE isn’t dead (there are lots of posts on that), it’s just considered to be complete, and all current development effort is focused on VSCode.

Great! Well, one of the things I like in my editor is my own custom snippets. I don’t have very many, but I use the ones I have pretty often. Here’s how to make one in VSCode.

In VSCode, press CTRL + Shift + P to open the command palette, and search for “Configure User Snippets”. From there, you can create a new global snippets file, or choose the language that the snippets you’re going to write are for. This helps you separate your JavaScript snippets from your PowerShell snippets, and so on. I opened my powershell.json snippets file.

By default, the file includes a description, syntax and even an example of how to make a new snippet. I won’t reproduce it here, but instead I’ll share one of my most-used snippets and describe the different parts and how they work. They’re declared in JSON.

"DateTime pre-pended Write-Verbose": {
    "prefix": "verb",
    "body": [
        "Write-Verbose \"[$(Get-Date -format G)] ${1:message}\"$0"
    ],
    "description": "Prepend datetime for Write-Verbose"
}

The name of my snippet is the first thing that’s indicated, “DateTime pre-pended Write-Verbose”, then the different properties that make up the snippet. The prefix I’ve given it is “verb”, which you’ll see the use for in a moment. Then, the body of the snippet. This one is just a “Write-Verbose” command, with part of the string to be written pre-populated. First, I’m writing the current time and date enclosed in square brackets, and then using the “${1:message}” notation, I’m placing the cursor at the location of that text, highlighting the word “message”. This makes it so when you insert the snippet, the word “message” is already highlighted and you can just start typing your message. Then I have a “$0” at the end so when you hit tab, it takes you to the end of the line, outside the quotation marks of the “Write-Verbose”. Finally, I gave the snippet a basic description.

To use the snippet, press CTRL + Shift + P to open the command palette, find Insert Snippet, and type the prefix you gave your snippet - mine is “verb”. Then, once you hit enter, the body of your snippet will be inserted - in my case, with the word “message” highlighted, right after the datetime.

That’s all there is to it! If you want to see a bunch of awesome snippets written by the community, check out the Community Snippets page on the VSCode-PowerShell GitHub.

Read More

Lean Coffee

I’ve just got back from the PowerShell and DevOps Global Summit in Bellevue, WA where I had the great pleasure of attending tons of excellent sessions on a bunch of PowerShell and DevOps topics. The main tracks were all recorded (hopefully uploaded soon, will update with link) but the side sessions were not.

I didn’t attend many of the side sessions, but one that I did was given by Glenn Sarti, a dev at Puppet. His session was on Lean Coffee, which I think is my new favorite format for informal meetings.

Lean Coffee is held and characterized as follows:

  • There is a timer, a person who is in charge of keeping things moving and enforcing the timing rules that follow.
  • Start by spending 5 minutes to gather topics. This is anything that people want to talk about. Nothing is off limits.
  • Everyone votes on which topic they want to talk about first. Everyone has two votes, which can be cast for whichever topics the voter wants to discuss first.
  • The timer tallies the votes, and the group starts talking about whichever item got the most votes.
  • Every two minutes, the timer interrupts the discussion and polls on whether people want to keep talking about the current topic or not. This is just a quick thumbs up or down. Everyone votes.
  • If the majority votes to continue on the current topic, the timer is reset for another two minutes, after which the timer will take another continue/move on poll.
  • If enough people vote to move on, the timer picks the topic with the next highest votes. The goal is to get through all of the topics.

You can modify the number of votes and timing as it makes sense for you, but this is what we went with. In this format, it’s easy to prevent two people from de-railing the conversation or monopolizing it since the topic is reviewed every two minutes.

This format sounds like it might be a bit high maintenance or disjointed with the constant timer interruptions, but when we practiced it in Glenn’s session, I found it efficient and clear. I’ll definitely be trying it out at work.

Read More

Quick Tip - Re-Run The Last Command

Sometimes, while you’re poking around in the console, you want to re-run the last command. Sure, you can hit the up arrow and enter, but PowerShell always gives you multiple ways to do things.

It’s easy using the Invoke-History cmdlet. You can also use its alias, which is just r. Running the cmdlet without any parameters will re-run the last command from the console. Alternatively, you can specify a value for the -Id parameter and run one from further back in your history.

How do you find out what those IDs should be? Use Get-History to see everything you’ve run in this session, along with the ID number for each command.
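As a quick sketch of the whole loop:

Get-History            # lists this session's commands along with their Ids
Invoke-History -Id 3   # re-runs the command with Id 3
r 3                    # the same thing, using the alias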

Read More

DevOps Story Time - Take A Risk - Risk Averse Doesn't Mean No Risks Ever

After the modest success of my last DevOps Story Time post on getting out of your own way, I feel like it’s time for another. This time, on the value of taking risks, and taking away a win even when you realize one of the risks you were afraid of.

Easter weekend 2018, for my 30th birthday, my girlfriend and I went to Banff, AB to do some hiking. When we got there, she surprised me with my “real” gift: that we were going to go visit a wolf sanctuary in Golden, BC, and go on a hike with some real, actual, not-fooling-around wolves. Obviously, I was beyond excited.

When you get there, before you go on your walking with wolves adventure, you sign some waivers which include some items about releasing rights to photos, but also include some very specific, lengthy sections to the tune of “These are wild wolves. Although they’ve been imprinted on humans (aren’t afraid of humans like a normal wolf, this is why they’re in the sanctuary), these aren’t like visiting your friend’s Labradoodle. These are wolves, and wolves can be dangerous and unpredictable.”

Now, you’re out there with (excellent, knowledgeable, informative) guides, and this part of the sanctuary is definitely a tourist attraction, so there’s a pretty good feeling of safety that sets in quickly. We understood, and of course, accepted the risks that we’d been made aware of, and embarked on our walk.

[Image: I got licked in the face by a wolf. Her name is Flora.]


[Image: There’s a couple more good ones above my hair line. She got a little close to my eye.]

For the most part, you and your small group walk through an area the guides are familiar with as the wolves do wolf things and dart around. They weave through the group plenty, you can pet them if they come up to you, and play and pose while people take pictures. At a certain point, the guides will find a good spot to take pictures of the guests in close proximity with a wolf. You don’t have to do it if you’re scared, but by this point, we felt really comfortable and were definitely not going to pass up a once in a lifetime opportunity to take a picture with a real, live wolf.

This was absolutely one of the coolest moments I’ve ever experienced, and a completely unforgettable memory. However, wolves can be a little unpredictable sometimes; she got a bit excited climbing all over me, and I came away with a pretty good scratch.

This is why you sign a waiver. And the scratch wasn’t really so bad. I cleaned it out, took care of it, and it will heal up just fine. This pic was taken when we got back to our car immediately after the hike. Pardon the toque hair.


What does this have to do with IT?

Believe it or not, there’s a DevOps related lesson to be learned here, and I’m not just bragging about my cool birthday experience, and subsequent Bond-villain-esque facial wound (maybe just a little).

I normally consider myself a pretty risk averse person. If you saw my stock portfolio, you’d probably agree. When it comes to your IT department, most traditional enterprises consider themselves “risk averse”, too. The thing is, though, people too often throw around the term “risk averse” as a reason not to take a risk. Being averse to risk isn’t a blanket excuse to never take any risks ever, for any reason. By definition, it just implies a certain amount of (hopefully healthy) skepticism to avoid undue risks.

Don’t let anyone immediately dismiss an idea, whether it’s disruptive or not, simply on the grounds of being risk averse. Every risk has an impact and likelihood (how severe it is, and how likely it is to actually happen), and if someone wants to dismiss an idea based on risk, it should be because the impact and likelihood are too high.

Urge your stakeholders to identify a risk tolerance. They’ll give you a non-zero number. Make it out of 25. “On a scale of 0 - 25, what would you say our appetite for risk is?” Say they come back with 10. That’s somewhat risk averse, right? Well, using the risk impact/likelihood model, give both the possible impact and likelihood a score out of 5. Multiply the two numbers to get a score out of 25, and compare that to your stakeholder’s risk tolerance. If a certain change comes with a risk that carries an impact of 2 out of 5 (if the change fails, users will experience a partial outage for 4 hours, but the company’s reputation will remain intact, no business will be lost, and it’s really just an inconvenience) and a likelihood of 2 out of 5 (pretty unlikely, but possible), the risk score is 4/25. This is well below the identified risk tolerance level of 10, and that’s leverage for you to go ahead with your change.
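
If it helps to make that arithmetic concrete, here’s a minimal PowerShell sketch of the calculation, using the numbers from the scenario above:

$impact     = 2  # partial outage for 4 hours, reputation intact
$likelihood = 2  # pretty unlikely, but possible
$tolerance  = 10 # the stakeholders' stated appetite for risk, out of 25
$riskScore  = $impact * $likelihood
if ($riskScore -le $tolerance) {
    "Risk score $riskScore/25 is within tolerance - make your case to proceed"
}
else {
    "Risk score $riskScore/25 exceeds tolerance - revisit the mitigation plans"
}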

The scores are obviously subjective. A partial four hour outage for users may be a catastrophe if you’re an internet service or cellular provider, instead of an inconvenience like in our scenario. It’s up to you to add reasoning to your scores so that your risk evaluation is more meaningful and valuable. Include mitigation plans that describe how you plan to lower the likelihood of a risk being realized, and the impact if it is. Be sure, at some point, to underscore the benefit of taking the risk. Actions can still be beneficial even if a risk is realized.

I strongly urge you to use this model and line of reasoning to combat the “well, we are pretty risk averse, so it’s going to be a no” blanket excuse. Rather than letting people’s comfort zone drive your IT strategy, force a more thoughtful, well-reasoned decision. You might still get a “no”, but at least it will be one that’s thought out, rather than impulsive.

Bringing it together

Like I mentioned, I am pretty risk averse, but I also don’t want to live a life without any fun and interesting experiences. I used the information that was available to me to enumerate the different risks of going out on a hike with wolves, the impact (mild disfigurement) and likelihood of those risks being realized (pretty low), including the mitigating factors of the guides, and weighed that all against my appetite for risk and the benefits of accepting the risk and doing it anyway.

Even though one of the risks was realized (my face is a little less pretty until this scratch heals up), the reward and benefit of the decision was still absolutely enjoyed. I had an amazing, once in a lifetime experience that will always be special to me. You can do the same in your own career, and at work. The road to success isn’t always well-paved.


April Fools PowerShell Prank - Write With All The Colors Of The Rainbow

Sometimes Write-Host gets a bad reputation. Lots of people will repeat inflammatory rhetoric that “Write-Host” kills puppies, and so on, but the only real problem with Write-Host is that people use it without knowing what it’s for. Write-Host is for writing to the console and only the console.

Other cmdlets like Write-Output are for writing to standard output which might be the console, or could be somewhere else down the pipeline. Write-Host’s output can’t be redirected to a log file, isn’t useful in unattended execution scenarios, and can’t be piped into another command. Lots of people who are new to PowerShell get into a habit of using Write-Host when they probably should have used Write-Output or something else instead. If you have someone you’re trying to train to stop using Write-Host when it’s not needed, consider this prank, just in time for April Fools Day.

There’s not much to it. Just add a couple of lines to their PowerShell profile.

# Pick a random console color for Write-Host's foreground and background every time it's called
$PSDefaultParameterValues.Add('Write-Host:ForegroundColor', { Get-Random ([System.Enum]::GetValues([System.ConsoleColor])) })
$PSDefaultParameterValues.Add('Write-Host:BackgroundColor', { Get-Random ([System.Enum]::GetValues([System.ConsoleColor])) })

This adds default values for the -BackgroundColor and -ForegroundColor parameters when Write-Host is called. If a script specifies a value for those parameters, those specific values will be used. If, instead, Write-Host is called without specifying values for the foreground and background color parameters, a random one will be picked each time.
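
When you’re ready to end the prank (or your victim figures it out), it’s just as easy to undo. Remove the lines from their profile and, for the current session, clear the entries:

$PSDefaultParameterValues.Remove('Write-Host:ForegroundColor')
$PSDefaultParameterValues.Remove('Write-Host:BackgroundColor')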


Quick Tip - PowerShell Supports Partial Parameter Names

Did you know that PowerShell supports the usage of partial parameter names? This isn’t such a big deal since tab completion is a thing… and if you’re writing code, you want to use the full parameter name to provide clarity and readability… but sometimes this is handy. Whether it’s for code golf, or just noodling around in the console, you don’t have to specify the full name of a parameter, just enough for it to be unique.

Here are some examples.

Get-ChildItem -Pa c:\temp
# Pa is matching the Path parameter
# Note: if you just use -P, you'll get an error saying the parameter can't be processed because it's ambiguous, along with a list of possible matches
Get-CimInstance -Clas win32_operatingsystem
# Clas is part of the ClassName parameter

Obviously this isn’t something that you should be running wild with and using in all your production code, but maybe it’ll explain how some random code you found on the internet works.


PowerShell Tip - Another Take On Progress Reporting

Normally in PowerShell, if you want to report progress on a long running task, you’d use the Write-Progress cmdlet to show a progress bar. That’s definitely the right way to do this, but what if you wanted a different way… for some reason? In the PowerShell Slack (invite yourself: slack.poshcode.org), I recently answered this question: “I want to write out ‘There are 3 seconds remaining. There are 2 seconds remaining.’ etc. until there are no seconds remaining and then keep going, but I don’t want them all to appear on different lines. I basically just want the number to update.”

The idea is a single line in the console where just the number updates in place (the examples below count up, but the asker wanted a countdown).

So, then, how do we get that? Well, the answer is ANSI Escape Sequences! These are encoded instructions included in a string to direct the console about how to change or manipulate the output. I use them in my prompt.

First, let’s just get our countdown - er, I mean countup - working. This is pretty straightforward.

1..10 | % { "There are $_ s remaining"; Start-Sleep -Seconds 1 }

This will write everything on its own line, like this.

There are 1 s remaining
There are 2 s remaining
There are 3 s remaining
There are 4 s remaining
There are 5 s remaining
There are 6 s remaining
There are 7 s remaining
There are 8 s remaining
There are 9 s remaining
There are 10 s remaining

Now, for the fanciness, what we really want is for the same line to get overwritten.

You can write an ESC inside of a string just like you would any other character. It’s represented by char 27. So we’ll set that equal to a variable $E.

$E = [char]27

Now we can embed it in strings; we just need the rest of the sequence. If you scroll through the Wikipedia page on ANSI escape codes, you’ll get to the CSI sequences section. These sequences all start with the escape character and then an open square bracket (this character: [). At the bottom of the table, you’ll notice the s and u sequences for saving and restoring the cursor position.

So all we need to do is save the cursor position when we start, and then restore it each time we want to overwrite the line.

"${E}[s"
1..10 | % { "${E}[uThere are $_ s remaining"; Start-Sleep -Seconds 1 }

On the first line, all I’m doing is saving the cursor position. I wrap the E in $E in curly braces so PowerShell doesn’t think the square bracket or the s is part of the variable name. You don’t strictly have to do this for this escape sequence, since the square bracket isn’t a valid character in a variable name, but for some other ANSI sequences, it’s a good habit to get into.

Then on the next line, I’ve just got a foreach-object loop (alias is %) that writes the same line over and over and sleeps for one second. The line it writes restores the cursor position to the one that was saved on the line above and then just writes “There are x s remaining”. We’re overwriting the same line over and over.

This works in our scenario because each line we write is the same length as or longer than the one before it. If you want to see this activity look a little odd, you can try something like this.

"${E}[sHello"
start-sleep -seconds 1
"${E}[uHi"

We’re saving the cursor position, writing “Hello”, waiting a second, then restoring the cursor position and writing “Hi”. We’ll see “Hello” for a second, and then the resulting line looks like this.

Hillo

This happens because we restored the cursor position and just started writing more characters. So, be careful of this if you’re using this trick in your own scripts. You can use the .PadRight() and .PadLeft() methods that are built into strings to fix this, or do something more dynamic, like detecting the length of the strings you’re writing.

"${E}[sHello"
start-sleep -seconds 1
"${E}[uHi".PadRight(20)

Notice on the last line, I’m using the .PadRight() method to pad the string with whitespace until it’s 20 characters long, which overwrites all of the leftover text that wasn’t being overwritten before.


Quick Tip - Update a Tag on an Azure Resource

Working with Azure resources can be a bit of an adventure sometimes. Say you want to update a tag on an Azure resource. Not remove it, but change its value. If you try to add a tag with the same name but different value, you’ll get an error that the tag already exists. Some of the ways you have available to get rid of a tag involve dropping all the other tags assigned to a resource. So, what do you do?

In this example, I have a couple VMs with a tag named “user” and a value of “thmsrynr”, and I want to keep the tag but change the value to “Thomas”.

Well, this extravagant one-liner will do the trick.

# Find every resource with the 'user' tag (Find-AzureRmResource doesn't return the tags themselves)
Find-AzureRmResource -TagName user |
    ForEach-Object { Get-AzureRmResource -ResourceId $_.ResourceId | # this one does return the tags
    ForEach-Object { $tags = $_.Tags
                     $tags['user'] = 'Thomas'
                     Set-AzureRmResource -Tag $tags -ResourceId $_.ResourceId } }

I like Find-AzureRmResource best for searching for resources with a specific tag, but it doesn’t return the tags for some reason that is beyond me. You can search by tag but the tags aren’t returned? Weird, right?

Anyway, I pipe everything I find into Get-AzureRmResource, which is bad at searching for resources but DOES return the tags. Then, for each resource found, I store the tags in a temporary variable (named $tags), and I work with that variable instead of working directly with the Azure object to update the “user” tag I care about. Then I set the tags on that resource to be what I stored. This should keep all the other tags intact, and update the value of one specific tag.

You could, and probably should, expand this into a more flexible function with parameters and filtering and such, but this example shows you how the tricky bits work.
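
As a rough starting point, here’s a sketch of what that more flexible function might look like. The function name and parameters here are just my own suggestions, not anything official:

function Update-AzureRmResourceTag {
    param (
        [Parameter(Mandatory)]
        [string]$TagName,

        [Parameter(Mandatory)]
        [string]$TagValue
    )
    # Find every resource carrying the tag, then rewrite just that tag's value
    Find-AzureRmResource -TagName $TagName | ForEach-Object {
        $resource = Get-AzureRmResource -ResourceId $_.ResourceId # this one returns the tags
        $tags = $resource.Tags
        $tags[$TagName] = $TagValue
        Set-AzureRmResource -Tag $tags -ResourceId $resource.ResourceId -Force
    }
}

# Usage: Update-AzureRmResourceTag -TagName 'user' -TagValue 'Thomas'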


Azure Automation - Diving Deeper (Pluralsight Course)

I’m very excited to share that my newest Pluralsight course was published over the weekend: Azure Automation: Diving Deeper. This builds on my first course, Getting Started with Azure Automation.

Pluralsight is a paid service but trials are available, and it’s a benefit of having an MSDN subscription. They’ve got thousands of hours of good stuff for people working in all areas of technology, including my new course.

My Azure Automation: Diving Deeper course will teach you everything you need to know to put Azure Automation on your resume, market yourself as an IT Automation pro, and increase your worth as a professional. Please check it out and don’t hesitate to contact me with any questions or feedback.

As a Pluralsight author, I am compensated for creating courses, so this is technically a sponsored post. I do, however, truly believe in their service, and think that many people who read my blog may benefit from watching my courses.


Regex Example - Strip Out HTML Tags

First and foremost, HTML is not regex friendly. You should not try to parse HTML with regular expressions in PowerShell unless you’ve lost some kind of bet or want to punish yourself for something. PowerShell has better tools for working with HTML (like the parsed HTML objects that Invoke-WebRequest can return) that make that kind of thing way less migraine inducing.

That said, I recently had a situation where I just wanted to strip all the HTML tags out of a string. My input looked something like this (assigned to a variable $html).

<html>
<body>
<p>This is an important value</p>
</body>
</html>

All I want is the “This is an important value” part, so this seemed like a place where the “don’t use regex on HTML” rule could be broken. It’s even a pretty simple regex.

$html -replace '(<\/*\w+?>){1,}'

You’ll have to wrap the expression in round brackets and call .Trim() on the result to clean up whitespace, but this will work for the “get rid of the HTML” goal. Let’s break down this regular expression to see what it’s doing.
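
Put together, the cleanup looks something like this:

($html -replace '(<\/*\w+?>){1,}').Trim()
# Should return: This is an important value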

Starting on the far right side, the {1,} specifies “one or more” of the pattern that precedes it, in this case, the rest of the expression wrapped in round brackets. Inside those round brackets is a pattern which states “an angle bracket (<), zero or more forward slashes escaped by a backslash (\/*), and as many alphanumeric characters as it takes (\w+?) to get to a closing angle bracket (>)”. It just rolls off the tongue, right?

Basically, we’re looking for any opening or closing HTML tag. This pattern won’t catch all HTML, though, like tags that have attributes inside them (like <img src="pic.png" />), but the regex in this example can easily be built upon to include cases like that, now that you’ve got this far. You could even just replace the \w with [^>], which means “any character except a closing angle bracket”.

Happy regexing!


Quick Tip - Open A File In Default Program

When you double click a file in Explorer.exe, it automatically opens in its default program if it has one associated with its type. But did you know you can do the same thing using PowerShell?

Most people are aware of Start-Process, the PowerShell cmdlet for starting processes on the computer, but most people only use it for executing things they’d normally run in cmd.exe, like installers. Start-Process also has a -FilePath parameter that you can use to open a file in its default program.

Start-Process -FilePath C:\temp\opens-in-excel.csv

On my computer, this command opens the given CSV in Excel. This is handy when you’re poking around in the console and exporting files: rather than clicking around inefficiently through Explorer.exe to open the file, you can launch it straight from the PowerShell CLI.


Quick Tip - Did the last command work or not?

In PowerShell, there are usually at least a few ways to do most tasks, and detecting whether the last command worked or resulted in an error is no exception. You could wrap code in a try/catch block, but sometimes that’s overkill. Regardless of your reason for wanting to get the worked/borked status of the last command, here are a couple simple ways of doing it.

The Get-History cmdlet is a great way to get this and other information for commands you executed in your current session. If you run it, you’ll get an ID and a “CommandLine”, which is the command that you ran, but if you pipe it into Select-Object -Property *, you’ll see that there’s also an ExecutionStatus and times for start and end. That ExecutionStatus is what you’re looking for in this case.

If you just run Get-History | Select * then you’ll get those pieces of information for every command you ran in your current session. If you only want to see the information for the last command you ran, just remember that you’re working with an array and run (Get-History)[-1] | Select *. That will get the last item in the array of command information returned by Get-History and show you that info.

If you truly just want to know if the last command completed successfully or not, there’s a special variable built into PowerShell for it. It’s $?. Just type $? into your console and you’ll get back either True or False, a boolean value for if the last command completed successfully or not. Pretty handy shortcut, no?
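
For example (C:\NoSuchFolder is a made-up path that shouldn’t exist):

PS> Get-Item -Path C:\Windows | Out-Null
PS> $?
True
PS> Get-Item -Path C:\NoSuchFolder   # writes an error
PS> $?
False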


Looking for someone to do a session on PowerShell (or DevOps or IT strategy or cloud architecture)? I'm your guy.

Are you a user group leader or event organizer who’s looking for speakers? I’d love to connect. I do my best to keep my eye out for CFPs and other speaker solicitations, but it doesn’t hurt to advertise my availability. Most of the dates I’m available to travel for speaking events in 2018 are taken, but I still have a bunch of dates I’m available to do virtual and remote events.

Here’s a list of sessions and their abstracts that I’ve got prepared and would love to present. If you see one you like, I’m best reached by email at thmsrynr@outlook.com or on Twitter at @MrThomasRayner. My bio is on the About page of this blog. If you like me but don’t see a session your attendees would love, I hope you’ll reach out anyway and we can see what I can come up with specifically for your event.

Session List:

PowerShell Release Management in Action

A year ago, I created a release pipeline for my team’s PowerShell code. It got off to a rocky start but we’re cruising now. Come see what I delivered and how it works. I’ll also tell you what went well, how it evolved, and the outright mistakes I made.

In this session, I’ll be talking about the real, in-production, actually being used release management program that I delivered, manage, and participate in at my current job. This isn’t some theoretical thing tossed together for a couple blog posts, or cobbled together for use with one community project. My team and I use this on every PowerShell coding project we take on. It’s flexible enough to be a one size fits (almost) all solution, while being rigid enough to ensure top quality for the solutions we deliver.

I’ll show attendees a real example of a request for automation going through the entire release management process, and share details about what I found worked well, what seemed like a good idea at the time, and managing obscure expectations laid upon me by management. I’ll cover everything from “I have an idea for some code” to “code’s running in prod”, and everything in between.

I Did DevOps Wrong But You Don't Have To

It’s a popular term, so you probably feel like you know what DevOps is, may be trying to implement it where you work, and are maybe even doing a good job. On the other hand, maybe not. Either way, I tried and failed a bunch before getting it right. Come learn from my mistakes!

If something is hard, that usually means it’s worth doing. I’ve been down the “heard of DevOps before anyone else I work with” path and seen Don Jones describe the “DevOps or ‘would you like fries with that’” choice. I got enthusiastic and wanted to ram DevOps principles down everyone’s throat. Guess what? Sometimes people resist change. Surprise! In this session, I’ll tell you about the mistakes I made, how I sold my organization on DevOps, and all the ways I’ve seen organizations screw DevOps up, so you can avoid my mistakes.

A Crash Course in Writing Your Own PSScriptAnalyzer Rules

PSScriptAnalyzer is great. You use it to check all your code to make sure it follows PowerShell best practices, right? In this session, I’ll show you how to take your PSScriptAnalyzer skills to the next level by showing you how to write your own custom rules, and make PSSA check your code for them.

PSScriptAnalyzer is great, not just because it comes with a bunch of rules that Microsoft and the community support, but because it allows you to put your own rules on top of (or instead of) it. Maybe you want to make sure that you’re using camelCase for your variables but PascalCase for your parameters. You’re going to need to write your own rule for that one. Writing your own PSSA rules can be intimidating up front, but I’m going to share some examples of rules I’ve written, used, and even got implemented as rules included with PSSA, to show attendees the unique authoring process, how to get started ripping apart the AST, and making their lives better with custom PSSA rules.

Remote Management of SQL Servers with PowerShell

Whether you’ve tried out PowerShell or not, make no mistake: PowerShell is here to stay. As a SQL Server pro, it’s in your interest to learn this powerful and robust language, and use it to automate tasks you come across regularly.

Join Thomas Rayner, PowerShell MVP and Honorary Scripting Guy, in this demo-packed session and learn about some of the different PowerShell-based tools available to you. You’ll be riding along with a PowerShell Pro, building a proof-of-concept script that uses different techniques to administer SQL servers.

DevOps Questions Answered

Come join Microsoft MVP and Honorary Scripting Guy Thomas Rayner to chat all about an explosively popular IT methodology: DevOps. Whether you’ve never heard of the term before, or you’re in the midst of adopting DevOps methods, bring your questions to this interactive session, or email them ahead of time to <todo: insert email address>. In this open discussion session, feel free to chime in with your own experiences, ask whatever questions come to mind, or just soak it all in.

Azure Portal vs Azure PowerShell Module Smackdown

Lots of people depend on the Azure Portal (accessed through the web browser) to administer Azure. The AzureRM PowerShell module, however, offers the same and often more features than the Portal does, and opens the door to automate administrative tasks in ways that are impossible through the GUI.

In this demo-heavy session, you’ll learn where the overlaps are, and what’s unique between these two options for administering Azure. Plus, you’ll pick up some quick tips for automating mundane and time-consuming Azure-related tasks.

Introduction to Azure Blob Storage

Azure Blob Storage is an exa-scale storage service from Azure that allows for scalable, efficient storage for petabytes of unstructured data. In this session, you’ll see a couple accessible demonstrations of how this service can be used.

Regex for Complete Noobs

Regular expressions are sequences of characters that define a search pattern, mainly for use in pattern matching with strings. Regular expressions are extremely useful to extract information from text such as log files or documents. If you don’t know basic regex, you’re missing out on a hugely important tool. Get some knowledge in you, and check out this session on regex.

The Gift of Community

Microsoft MVPs have hugely diverse backgrounds, expertise, and strengths, but one thing every MVP shares is a passion for COMMUNITY. Giving back to the technical community is how MVPs get awarded, but getting awarded is not why MVPs do what they do. Join us in this session to learn about participating in technical communities, and you’ll find the motivation to get going too!

Writing Your First Azure Automation Runbook

Azure Automation is a core service offered by Azure that allows people to run scripts and workflows unattended, on a schedule, on demand, on infrastructure they don’t have to worry about. Azure Automation is a very flexible tool that anyone with any Azure presence at all should be looking into.

In this demo-heavy session, you’ll get a short crash course in writing your first Azure Automation runbook from Thomas Rayner, who’s responsible for several Azure Automation Pluralsight courses.

Writing Your First Azure Function

Rather than worrying about maintaining servers, Azure Functions allow you to focus on building great apps. Functions provides a fully managed compute platform with high reliability and security. With scale on demand, you get the resources you need when you need them. You can create functions in tons of languages including JavaScript, C#, F#, Python, PHP, Bash, Batch, and of course, PowerShell.

In this demo-heavy session, you’ll see how to create a simple Azure Function and learn the fundamental skills needed to take advantage of this powerful, flexible platform.

Stupid PowerShell Tricks

Don’t get got by these gotchas. Come learn how to avoid some common troubles, and discover some new tricks in PowerShell that will take your scripts to the next level. Take it from this pro, it’s better to hear about these tricks in this session than after fighting with a script for a week.

How To Write (PowerShell) Code That Doesn't Suck

Do you write code? Even a little? Coding is quickly becoming a necessary skill to have. We’re going to get into some of the things that make your code sad, and some of the things you can do to make it happy again. Want to know how to make your scripts run faster? Want to know how to get other people to stop asking you how your code works? Want to be a valuable member of the IT community? Well, get in here and I’ll show you.


PowerShell - Control What Order Properties Are Displayed On Custom Objects And Hash Tables

There are a handful of different ways to create custom objects in PowerShell, including building one from a hash table. You might do something like this.

PS> $props = @{'prop1' = 1; 'prop2' = 2}
PS> $obj = New-Object -TypeName PSObject -Property $props

But then, just run $obj and see what you get. This is what I got.

PS> $obj

prop2 prop1
----- -----
    2     1

It put prop2 before prop1 even though I put prop1 first in the hash table! Most of the time, this doesn’t matter, but what about when it does?

Sure, you could use Select-Object to accomplish this.

PS> $obj | Select-Object -Property prop1,prop2

prop1 prop2
----- -----
    1     2

But that seems excessively verbose and inefficient. Luckily there is a better way. Introducing Ordered Hash Tables!

Ordered hash tables are pretty much what they sound like. By default, a hash table is a collection of objects (key and value pairs) whose order isn’t important. An ordered hash table is the same thing, except the order that items are added is preserved, and they’re pretty easy to use. The first lines of code from this example become this.

PS> $props = [ordered]@{'prop1' = 1; 'prop2' = 2}
PS> $obj = New-Object -TypeName PSObject -Property $props

Just add the [ordered] accelerator to the hash table, and now PowerShell will respect the order that you enter items into your hash table.
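
Run $obj again, and the properties should now come out in the order you defined them.

PS> $obj

prop1 prop2
----- -----
    1     2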



DevOps Story Time - Get Out Of Your Own Way

Starting now, I’m experimenting with new post formats on my blog. Instead of just technical posts describing code, I’m going to begin posting some more free-form articles. Like this one, where I’m going to share a story with you that has a moral relating back to IT.

It was the start of December 2017, and I was in Toronto to attend MVP Community Connection day, an event exclusively for Microsoft MVPs where we get together, socialize and connect with each other and Microsoft employees, get a little soft skills training, and provide feedback on things we’d like to see from Microsoft in the upcoming months. MVPs from across Canada traveled to Toronto for this always enjoyable event.

Microsoft is kind enough to supply accommodations at a hotel I won’t name, and I had the great pleasure of sharing a room with fellow MVP and all-round good guy, Will Anderson. We had a great time except for one small inconvenience. From the moment we first used it, it was apparent that our toilet had issues. This thing wouldn’t even flush the water that it filled itself up with. I know what you’re thinking, but we treated it with respect.

[Image: Will and myself at MVP Summit 2016, preparing content for the 10th anniversary of PowerShell celebration.]

No big deal, we called the front desk and asked them to send someone to try and fix it. One more problem, though: it was after midnight by the time we got back to our room, which meant the normal maintenance people had gone home. So, the person they sent shows up empty handed, tries flushing the toilet again (like we didn’t try that), and says that’s all she can do. Neither Will nor myself are plumbers, but we suggested maybe this person might try to use a plunger on the situation. She replied that she did not have the key to the plunger but could send someone at 7 AM with one. That’s right. This person responding to our potential plumbing emergency did not have the key to the plunger. Luckily, we didn’t have any urgent “needs” and nothing was in any risk of impending doom.

What’s the moral here? How does this relate to IT? Well, imagine you find yourself in an emergency situation - you’ve been hit by ransomware, your CFO’s Active Directory credentials have been compromised, a Hyper-V host just went down, or some other metaphorical “toilet” has become “clogged”. You have people there who are willing to help, just like Will and I had this maintenance person in the hotel, but are they able to actually do what they need to do in an emergency?

Say your IT staff need to respond to an emergency. Have they practiced normal emergency response scenarios? Does everyone know their role and responsibilities? Is anyone going to get stuck on managerial approval, missing a privilege, or following outdated instructions? Obviously safety measures and safeguards are important, but there’s a point of diminishing returns (can’t unclog a toilet because you don’t have the plunger key, can’t temporarily disable a compromised CFO’s account because you don’t have permission to disable executives’ accounts, etc.).

In addition to making sure you haven’t hit that point of diminishing returns with your IT safeguards, know that your disaster recovery and emergency reaction manuals are only as valid as the last time you tested them. You might be doing a great job of backing up your NAS every night, but how good are you at restoring it?

The bottom line is: make sure you stay out of your own way when it comes to IT emergencies, and test to make sure your steps to respond are actually valid. It could save you a lot of pain and embarrassment one day!


PowerShell Regex Example - Incrementing Matches

In the PowerShell Slack, I recently answered a question along these lines. Say you have a string that reads “first thing {} second thing {}” and you want to get to “first thing {0} second thing {1}” so that you can use the -f operator to insert values into those spots. For instance…

"first thing {0} second thing {1}" -f $(get-date -format yyyy-MM-dd), $(get-random)
# Will return "first thing 2018-01-10 second thing <a random number>"

The question is: how can you replace the {}’s in the string with {<current number>}?

You might think you could do something like this.

$string = 'first thing {} second thing {}'
$i = 0
[regex]::replace($string, "\{\}", {"{$($i)}"; $i++})

But you’d be disappointed.

If you’re not sure what’s going on, the [regex] accelerator has a Replace method that works like -replace does. It takes three arguments: the string that is being worked on, the pattern being detected, and what to replace the pattern with. In this case, I gave it a scriptblock meant to replace each match with the value of $i and then increment that variable by one. Unfortunately, it doesn’t look like $i gets incremented. Everything gets replaced with {0} instead of an incremented number.

Long story short, this is a PowerShell scoping issue. If you make $i a part of a bigger scope like global, script, or something that makes sense for your situation, you’ll get the results you desire (global is probably not the best choice).

$string = 'first thing {} second thing {}'
$global:i = 0
[regex]::replace($string, "\{\}", {"{$($global:i)}"; $global:i++})

#returns first thing {0} second thing {1}

And this will work just fine in your -f replacement operations!
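
If this code lives in a script rather than being pasted straight into the console, the script scope is probably a more sensible choice than global. A minimal variant under that assumption:

$string = 'first thing {} second thing {}'
$script:i = 0
[regex]::replace($string, "\{\}", {"{$($script:i)}"; $script:i++})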


HackTheBox.eu Walkthrough - Apocalyst

If you’re a frequent reader of my blog, you know that I mostly post about PowerShell, Microsoft related automation, and that sort of thing. In a previous life, however, I thought I wanted to make a career out of infosec - particularly penetration testing and red team type of stuff. I’m super happy with where my career went instead, but from time to time, I enjoy attempting to knock some of the rust off my ethical hacking/pentesting skills (what little of them there are), and trying my hand at some vulnerable by design boxes. Since it’s the holiday season, I decided to switch things up a little bit for the last couple blog posts.

HackTheBox.eu offers a cool variety of vulnerable by design virtual machines for people to practice their pentesting skills against. There are strict rules about sharing spoilers for “active” boxes, but there are only so many of those, and lots of “retired” boxes are available as well. In today’s post I’m going to share a walkthrough of how I did the retired box “Apocalyst”.

I am not a professional penetration tester or red teamer, nor is this meant to be the type of write up that I’d provide to a client if I was doing this for money. This is just a summary of what I did to get the user and root flags on the box. I’m not going to get into any of the rabbit holes or areas that didn’t lead to a solution (because this isn’t a real write up).

First things first, I ran nmap to see what might be up and running on the box. I ran safe scripts, enumerated versions, and saved all output with the file basename “nmap”.

nmap -sC -sV -oA nmap 10.10.10.46

You’ll quickly find a Wordpress site is up and running. If you run wpscan --enumerate u or just look around at some of the posts, you’ll find a username: falaraki. If you use dirbuster and any of the normal wordlists, you may have a challenging time. I used cewl to make a wordlist for dirbuster after receiving a small nudge in the HTB Slack channel. You’ll find a whole pile of directories, each one containing a seemingly identical image. Sounds like we’re in for everybody’s favorite thing in the world: steganography!

I wrote a quick script to download all of them because I love taking up tons of disk space on my VM, and you’ll find that the image in the “Rightiousness” directory looks the same as the others but has a larger file size. You can use steghide extract -sf rightiousness.jpg with no password to extract a wordlist hidden in the larger image. Maybe this is why the box is called Apocalyst… apoca… list. Wordlist. Maybe.

I used hydra on the Wordpress login page with the falaraki username I found earlier and the wordlist that came out of the image. The Wordpress password for falaraki is in the list, and I got in. This password does not work for the same user or root on SSH. As far as I could tell, the stegoed wordlist doesn’t contain any other useful passwords.

Since falaraki is a Wordpress admin, I uploaded my own plugin to the Wordpress site, which executed PHP to get the user flag and a reverse shell onto the box as the www-data user.

If you explore /home/falaraki, you’ll find a hidden file named “.secret”. Check it out, and you’ll see pretty quickly that it’s a base64 encoded file. Decode it to find a password for falaraki that you can use to SSH into the box, elevating yourself from www-data to falaraki.

After running some privilege escalation enumeration scripts, it quickly became apparent that falaraki has write permissions on /etc/passwd, so I added a uid 0 user with a password that I knew, authenticated as that user, and used my root privileges to get the root flag.

Overall, I didn’t love this box. It felt a little too capture-the-flag-y with all the images and then a wordlist being what was hidden via steganography. Still, I learned about some basic stego techniques which I appreciated, and exploiting the ability to upload my own Wordpress plugins.


HackTheBox.eu Walkthrough - Blocky

If you’re a frequent reader of my blog, you know that I mostly post about PowerShell, Microsoft related automation, and that sort of thing. In a previous life, however, I thought I wanted to make a career out of infosec - particularly penetration testing and red team type of stuff. I’m super happy with where my career went instead, but from time to time, I enjoy attempting to knock some of the rust off my ethical hacking/pentesting skills (what little of them there are), and trying my hand at some vulnerable by design boxes. Since it’s the holiday season, I decided to switch things up a little bit for the next couple blog posts.

HackTheBox.eu offers a cool variety of vulnerable by design virtual machines for people to practice their pentesting skills against. There are strict rules about sharing spoilers for “active” boxes, but there are only so many of those, and lots of “retired” boxes are available as well. In today’s post I’m going to share a walkthrough of how I did the retired box “Blocky”.

I am not a professional penetration tester or red teamer, nor is this meant to be the type of write up that I’d provide to a client if I was doing this for money. This is just a summary of what I did to get the user and root flags on the box. I’m not going to get into any of the rabbit holes or areas that didn’t lead to a solution (because this isn’t a real write up).

First things first, I ran nmap to see what might be up and running on the box. I ran safe scripts, enumerated versions, and saved all output with the file basename “nmap”.

nmap -sC -sV -oA nmap 10.10.10.37

You’ll quickly see that there is a Wordpress site running, and when you visit it, it appears to be a place for information on someone’s Minecraft server. After some basic poking around, and taking notice of the “Notch” username of the person who made all the posts on the site, I ran dirbuster on it to find any possibly interesting directories or files.

Poking around in the dirbuster results, I found a /wp-content/uploads folder that contained two .jar files. It was around this time that I learned that .jar files are basically just archives and can be unzipped with things like unzip on the Linux CLI. Predictably, there are some .class files in the extracted data, which can be decompiled using javap.

If you search through the decompiled classes, you’ll find a hard coded password for “root”. This isn’t the root password, but rather is for something Minecraft related. Still, it seemed too good to be true, so I started sticking it in other places, using the “Notch” username that was found earlier.

Turns out, password re-use is at play here and you can SSH into Blocky using the “Notch” account and the password from the decompiled .class. Notch can read the user flag.

I spent far too long enumerating and poking around looking for a privilege escalation of some kind. If you just run sudo -l, though, you’ll see that Notch has full sudo rights, and so you can sudo anything you want, including reading the root flag.

Even though Blocky was a very easy box to pop, I still learned about extracting jars and decompiling classes, which made me appreciate it for what it is. I also learned that there are a few pieces of low-hanging privilege escalation fruit to check on every box before going too nuts enumerating everything.


HackTheBox.eu Walkthrough - Europa

If you’re a frequent reader of my blog, you know that I mostly post about PowerShell, Microsoft related automation, and that sort of thing. In a previous life, however, I thought I wanted to make a career out of infosec - particularly penetration testing and red team type of stuff. I’m super happy with where my career went instead, but from time to time, I enjoy attempting to knock some of the rust off my ethical hacking/pentesting skills (what little of them there are), and trying my hand at some vulnerable by design boxes. Since it’s the holiday season, I decided to switch things up a little bit for the next couple blog posts.

HackTheBox.eu offers a cool variety of vulnerable by design virtual machines for people to practice their pentesting skills against. There are strict rules about sharing spoilers for “active” boxes, but there are only so many of those, and lots of “retired” boxes are available as well. In today’s post I’m going to share a walkthrough of how I did the retired box “Europa”.

I am not a professional penetration tester or red teamer, nor is this meant to be the type of write up that I’d provide to a client if I was doing this for money. This is just a summary of what I did to get the user and root flags on the box. I’m not going to get into any of the rabbit holes or areas that didn’t lead to a solution (because this isn’t a real write up).

First things first, I ran nmap to see what might be up and running on the box. I ran safe scripts, enumerated versions, and saved all output with the file basename “nmap”.

nmap -sC -sV -oA nmap 10.10.10.22

Among other things, one of the items that is immediately revealed is that Europa is running a web server, and has an SSL certificate protecting the HTTPS part. Right there in the nmap output, you can see that the certificate has a subject alternative name for an “admin-portal” subdomain.

Obviously with a name as juicy as “admin-portal”, that’s where I’m going to start. You can find the login.php page there. As with any login form, I tried a couple manual SQL injection techniques but wasn’t immediately granted access, so I used Burpsuite to capture a post request to login.php and sent it over to sqlmap. Eventually sqlmap will pop that login form open for you and get you redirected to dashboard.php.

Poking around the admin portal, you’ll quickly find a page named tools.php that has a “VPN config builder”. It appears to be doing some sort of find and replace to plug in a value that you provide. Immediately, this seems like a place to insert some code.

Capture one of the posts to the config builder and you’ll see that there are actually a few variables being sent: one for the pattern being detected (where the “IP” will be inserted), one for the IP/value that will be inserted into the config, and one for the base config itself (which includes the pattern from the first variable). PHP uses a function called preg_replace to do this kind of find and replace functionality, and it doesn’t execute code by default. There is, however, a way to make it do just that.

If you replace the IP address to be inserted into the config with code, the builder will write out the literal code you entered without executing it. If you append an “e” to the first argument that preg_replace takes, though, it will evaluate the replacement as code instead. So if you change the pattern to “/mything/e”, set the value to replace it with (the variable named IP) to some code, and set the rest of the config to just be “mything”, it will be easier to comb through the output. This can all be done in Burpsuite’s repeater, and you’ll see the output of the command you put in the IP field instead of the literal string that you put in there. You can use this technique to get the user flag and to create a reverse shell.

If you run any decent privilege escalation enumeration script after you’re logged in with your reverse shell (I prefer LinEnum.sh but I’m a noob so do what you like), you’ll see that there is an unusual PHP file being executed by root via cron every minute. You can read it and see that it clears logs and then calls another script when it’s done. This second script that it calls is world writable, so you can put anything you want in it, and it will be executed as root in about a minute. You can use this technique to get the root flag, and you’re done.

Overall, I enjoyed this box. I learned about the preg_replace attack vector, which I didn’t previously know about, and I like learning new things. The privesc was really straightforward, but sometimes that’s not so bad.


Formatting Strings In PowerShell Using Fixed Width Columns

Working with strings in PowerShell is fun, I don’t care what you say. In this post, I’m going to show you how to clean up the strings your code outputs, at least in some situations.

Say you have a variable $fileExtensions which you populated with this command.

PS> $fileExtensions = Get-ChildItem | Group-Object -Property Extension

And, for some reason, instead of the default output which is formatted like a table, I want output presented like this.

.ps1     file extension: 11
.xlsx    file extension: 3
.dll     file extension: 3

This is a silly example, but notice that even though there are extensions of varying length (.ps1 and .dll are four characters including the dot, and .xlsx is five), all of the “file extension: <number>” is aligned.

How’d I do that? Let’s start with some code that doesn’t work.

PS> $fileExtensions | foreach-object { "$($_.Name) file extension: $($_.Count)" }

.ps1 file extension: 11
.xlsx file extension: 3
.dll file extension: 3

How incredibly unfortunately unattractive! Luckily, it’s not too hard to fix. Check out this code.

PS> $fileExtensions | foreach-object { "{0,-8} file extension: {1}" -f $_.Name, $_.Count }


.ps1     file extension: 11
.xlsx    file extension: 3
.dll     file extension: 3

Oh yes, look at that goodness. In this example I’m using the -f operator to insert the variables into the string. Let’s break down the new string I’m creating.

{0} and {1} are basically placeholders. The -f operator is going to insert the variables that come after it ($_.Name and $_.Count) into the 0 and 1 spots.

The ,-8 is a formatting instruction. The 8 means that this part of the string is going to be a column that takes up at least 8 characters worth of space. The negative sign means that the data inserted is left aligned. If I had used “positive eight”, it would have been right aligned.

Now you can take this and run with it, to do fun things like this.

# Input
$header = "[$(Get-Date -format G)]"
Write-Output "$header First line"
Write-Output $("{0,$($header.Length)} Second line" -f " ")
Write-Output $("{0,$($header.Length)} Third line" -f " ")

# Output

[11/2/2017 12:47:58 PM] First line
                        Second line
                        Third line

Beginner PowerShell Tip - The .Count Property Doesn't Exist If A Command Only Returns One Item

If you’re just getting started in PowerShell, it’s possible that you haven’t bumped into this specific issue yet. Perhaps you’ve got a variable $users and you’re assigning it a value like this.

PS> $users = Get-ADUser -Filter "samaccountname -like '*thmsrynr'"

This will get all the users in your Active Directory whose username ends with “thmsrynr”.

Great! Now how many users got returned? We can check the Count property to find out.

PS> $users.Count
3

Looks like there are three users in my AD that got returned. Now, the problem at hand: what if there’s only one user returned? What if only one user in my AD has that kind of username? I’ll end up with this.

PS> $users.Count
# Nothing gets returned...

Even though I can do this and see there is one user in there.

PS> $users

DistinguishedName : CN=ThmsRynr,OU=Users,DC=PCLINC,DC=domain,DC=tld
Enabled           : True
GivenName         : Thomas
Name              : Thomas Rayner
ObjectClass       : user
ObjectGUID        : <snip>
SamAccountName    : ThmsRynr
SID               : <snip>
Surname           : Rayner
UserPrincipalName : thmsrynr@outlook.com

What if I was doing something like this?

if ($users.Count -gt 0) {
    # Do something
}
else {
    # Do something else
}

Since $users.Count is null even when there’s one user in there, my if statement won’t work correctly. Well, you can take a shortcut and do something a little different when you’re assigning a value to $users.

$users = @(Get-AdUser -Filter "samaccountname -like '*thmsrynr'")

By wrapping the command in @( ) we are forcing $users to be an array even if only one item is returned.

This issue happens because PowerShell loves to unroll arrays and other collections for you. By doing this workaround, if there’s only one AD user whose username ends in thmsrynr, you’ll still end up with an array with a single item in it, and $users.Count will return “1” like you expected.
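
You can see the difference for yourself with a command that’s guaranteed to return exactly one object, like getting your current PowerShell process:

$one = Get-Process -Id $PID        # a single object
$wrapped = @(Get-Process -Id $PID) # an array containing that single object
$wrapped.Count                     # returns 1
$wrapped.GetType().Name            # returns Object[]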


Beginner PowerShell Tip - Using Variable Properties In Strings

If you’re just getting started in PowerShell, it’s possible that you haven’t bumped into this specific issue yet. Say you’ve got a variable named $user and this is how you assigned a value to it.

$user = Get-AdUser ThmsRynr

Using the Active Directory module, you got a specific user. Now, you want to report two properties back to the end user: SamAccountName and Enabled. The desired output looks like this:

The account ThmsRynr has enabled status of True.

So, you try something like this.

Write-Output "The account $user.SamAccountName has the enabled status of $user.Enabled"

And you’ll get something totally unexpected!

The account CN=ThmsRynr,OU=Users,DC=domain,DC=tld.SamAccountName has the enabled status of CN=ThmsRynr,OU=Users,DC=domain,DC=tld.Enabled

Whaaaat? That’s not what we want. What happened? It looks like I got the distinguished name of the user and then literally “.SamAccountName” and “.Enabled”. Doesn’t PowerShell know that I actually want the SamAccountName and Enabled properties?

Well, the short answer is no, PowerShell doesn’t know that. What if you had a variable $domain set to “workingsysadmin” and wanted to do Write-Output “$domain.com” to get “workingsysadmin.com” written out? PowerShell doesn’t know if you’re trying to access a property on the variable, or work with .com (or .SamAccountName or .Enabled) as a literal string.

So what do we do? We’ll use some brackets.

Write-Output "The account $($user.SamAccountName) has the enabled status of $($user.Enabled)"

This will give the desired output. What we’ve done is use $( ) to tell PowerShell that we want to evaluate the entire expression wrapped in those brackets, and use that in our string.


Azure Resource Manager PowerShell Module Quirk

If you’ve used the Azure Resource Manager (AzureRM) PowerShell module much, you may have noticed that it sometimes behaves strangely. In this post, I’m going to share one quirk that had me stuck for longer than I care to admit…

So, here’s the situation. I was working in the PowerShell console, looking to enumerate the automation schedules (that kick off Azure Automation runbooks) in one automation account. I ran this command to get started.

Get-AzureRmAutomationSchedule -ResourceGroupName 'my-rg' -AutomationAccountName 'my-aa'

I got predictable output, too. Here’s the example output for one of the schedules.

StartTime              : 9/30/2017 12:01:00 PM -06:00
ExpiryTime             : 12/31/9999 4:59:00 PM -07:00
IsEnabled              : True
NextRun                : 11/30/2017 12:01:00 PM -07:00
Interval               : 1
Frequency              : Month
MonthlyScheduleOptions :
WeeklyScheduleOptions  :
TimeZone               : America/Denver
ResourceGroupName      : my-rg
AutomationAccountName  : my-aa
Name                   : General Name
CreationTime           : 9/20/2017 10:26:26 AM -06:00
LastModifiedTime       : 9/20/2017 10:26:26 AM -06:00
Description            :

There’s just one problem. I know for a fact this schedule runs on the last day of every month, and includes data in the MonthlyScheduleOptions field which is empty above. What gives?

Well, as it turns out, the monthly and weekly schedule options are only returned if you specify a specific automation schedule like this.

Get-AzureRmAutomationSchedule -ResourceGroupName 'my-rg' -AutomationAccountName 'my-aa' -name 'General Name'

Now I get this output.

StartTime              : 9/30/2017 12:01:00 PM -06:00
ExpiryTime             : 12/31/9999 4:59:00 PM -07:00
IsEnabled              : True
NextRun                : 11/30/2017 12:01:00 PM -07:00
Interval               : 1
Frequency              : Month
MonthlyScheduleOptions : Microsoft.Azure.Commands.Automation.Model.MonthlyScheduleOptions
WeeklyScheduleOptions  : Microsoft.Azure.Commands.Automation.Model.WeeklyScheduleOptions
TimeZone               : America/Denver
ResourceGroupName      : my-rg
AutomationAccountName  : my-aa
Name                   : General Name
CreationTime           : 9/20/2017 10:26:26 AM -06:00
LastModifiedTime       : 9/20/2017 10:26:26 AM -06:00
Description            :

That on its own isn’t useful, because it’s just showing me the type of object that’s there, but I can do something like this to get more meaningful output (for my purposes here).

Get-AzureRmAutomationSchedule -ResourceGroupName 'my-rg' -AutomationAccountName 'my-aa' -name 'General Name' | select Name,@{l='DaysOfMonth'; e={$_.MonthlyScheduleOptions.DaysOfMonth}}

And get output like this.

Name                     DaysOfMonth
----                     -----------
General Name             LastDay

There are more elements in the weekly and monthly schedule options than I’m using here, but this is just to highlight the behavior of the Get-AzureRmAutomationSchedule cmdlet. If you don’t specify a value for the -Name parameter, then you won’t get all the information back about the schedules you’re seeing.

By the way, there are plenty of other AzureRM cmdlets that work this way. So, if you’re working with Azure in PowerShell and wondering why the AzureRM module doesn’t seem to be giving you all the data that it should, check for the presence of this quirk.


Referencing Non-String Hashtable Keys in PowerShell

Say you’ve got a hashtable with a bunch of data in it, but the key is not a string. How do you refer to specific items?

You can set something up to experiment with this with this code.

PS> $a = @{}
PS> 1..3 | % { $a.add($(New-Guid), $_) }

Declare $a as a new empty hashtable, and then add three items to it. The key is a GUID, and the value is just a number. You get something like this.

PS> $a

Name                           Value
----                           -----
a2022422-ffe6-4291-a736-c1d... 1
33b8251c-8c09-433c-ae88-666... 3
4d9d41c1-8a0b-4326-ad59-164... 2

Now say you want to refer to the first item in the list, whose key/GUID in my example is a2022422-ffe6-4291-a736-c1de97720f25. You could try any of these.

PS> $a.a2022422-ffe6-4291-a736-c1de97720f25
PS> $a.'a2022422-ffe6-4291-a736-c1de97720f25'
PS> $a['a2022422-ffe6-4291-a736-c1de97720f25']
PS> $a.Item('a2022422-ffe6-4291-a736-c1de97720f25')

But none of these actually return any information. The problem is that the key is a GUID, not a string, but we’re trying to refer to it as a string. Instead, you have to treat it like a GUID.

PS> $a[[guid]'a2022422-ffe6-4291-a736-c1de97720f25']

1

By casting the string as a GUID, you’re telling the hashtable that you’re not just looking for a string. The same thing works for other key data types, like integers.
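
The same trap exists with integer keys, which makes it easy to demonstrate. A quick sketch:

PS> $b = @{}
PS> $b.Add(1, 'one')   # the key is the integer 1, not the string '1'
PS> $b['1']            # returns nothing; a string key doesn't match an int key
PS> $b[1]              # returns 'one'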

Read More

Add A Work Note To A ServiceNow Incident With PowerShell

I have previously written about working with the ServiceNow API, and I’ve continued to use it since my last post on the topic. One of the things that I find myself doing a lot is using PowerShell to add a work note to an incident. Luckily, ServiceNow has an API that you can use to interact with it and do this (among many other things).

Since I know that all my information is stored in the Incident table, it’s not too many steps to get an incident out of ServiceNow if I have the incident number.

$user = $Credential.Username
$pass = $Credential.GetNetworkCredential().Password
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $pass)))

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add('Authorization',('Basic {0}' -f $base64AuthInfo))
$headers.Add('Accept','application/json')

$uriGetIncident = "https://$SubDomain.service-now.com/api/now/table/incident?sysparm_query=number%3D$SNIncidentNumber&sysparm_fields=&sysparm_limit=1"
$responseGetIncident = Invoke-WebRequest -Headers $headers -Method "GET" -Uri $uriGetIncident
$resultGetIncident = ($responseGetIncident.Content | ConvertFrom-Json).Result

Assuming I already created a credential object named $Credential to hold my ServiceNow creds, I can do some encoding to assemble them in a way that lets me add them to the header of the request I’m about to make. I’m doing that on the first three lines.

On lines 5 - 7, I’m constructing those headers. So far, I’m following the PowerShell examples given in the ServiceNow documentation, and these steps are similar to my last post on using the ServiceNow API.

Line 9 is where I create the URI for the incident get request. You’ll notice I have a variable for both the subdomain (will be unique for your instance of ServiceNow) and the ServiceNow incident number.

Lines 10 and 11 get the incident and parse the results of my request.

Now I can add some work notes.

$workNotesBody = @"
{"work_notes":"$Message"}
"@
$uriPatchIncident = "https://$SubDomain.service-now.com/api/now/table/incident/$($resultGetIncident.sys_id)"
$null = Invoke-WebRequest -Headers $headers -Method "PATCH" -Uri $uriPatchIncident -body $workNotesBody

On lines 1 - 3, I’m making the body of my patch request, to say that I’m adding the value of $Message into the work_notes field of my incident. Line 5 is where I make the URI for this patch activity, using the sys_id that came out of the get query I performed earlier.

On line 6, I’m muting the output of the web request that adds the work notes to the incident. I’m reusing the headers I set up for the get query.
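
If you find yourself doing this a lot, it’s convenient to wrap both snippets into one function. Here’s a minimal sketch; the function name and parameters are my own invention, not anything ServiceNow prescribes.

function Add-ServiceNowWorkNote {
    param (
        [pscredential]$Credential,
        [string]$SubDomain,
        [string]$SNIncidentNumber,
        [string]$Message
    )
    # Build the basic auth headers from the supplied credential
    $user = $Credential.Username
    $pass = $Credential.GetNetworkCredential().Password
    $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $pass)))
    $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
    $headers.Add('Authorization',('Basic {0}' -f $base64AuthInfo))
    $headers.Add('Accept','application/json')

    # Look up the incident's sys_id from its number
    $uriGetIncident = "https://$SubDomain.service-now.com/api/now/table/incident?sysparm_query=number%3D$SNIncidentNumber&sysparm_fields=&sysparm_limit=1"
    $responseGetIncident = Invoke-WebRequest -Headers $headers -Method "GET" -Uri $uriGetIncident
    $resultGetIncident = ($responseGetIncident.Content | ConvertFrom-Json).Result

    # Patch the work_notes field with the supplied message
    $workNotesBody = @"
{"work_notes":"$Message"}
"@
    $uriPatchIncident = "https://$SubDomain.service-now.com/api/now/table/incident/$($resultGetIncident.sys_id)"
    $null = Invoke-WebRequest -Headers $headers -Method "PATCH" -Uri $uriPatchIncident -Body $workNotesBody
}

Then adding a note is a one-liner: Add-ServiceNowWorkNote -Credential $Credential -SubDomain 'mysub' -SNIncidentNumber 'INC0012345' -Message 'Hello from PowerShell' (the incident number here is made up).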

Read More

PowerShell + DevOps Global Summit 2018 Tickets Are On Sale

Registration for the PowerShell + DevOps Global Summit just opened today. This thing sells out every year so now is the time to start getting approval to attend if you need it, and buy a ticket.

Check out the event brochure for info about the conference. You can use it as leverage to convince whoever needs convincing that you should go. The PowerShell + DevOps Global Summit speaker lineup and session schedule is also up right now, and as you’ll see, it’s absolutely stacked. This is also a great chance to meet people who work at Microsoft on the PowerShell (and other) teams, as well as a bunch of MVPs at the top of this field. Make no mistake, this is a crazy good networking opportunity.

There are limited hotel discount codes available, and plane tickets will probably only rise in price as you wait, so get on it if you’re going to come!

Some of the sessions I’m most excited for are Kirk Munro’s Become a PowerShell Debugging Ninja, Warren Frame’s Connecting the Dots with PowerShell, Eli Hess’ PowerShell IoT, Ryan Coates’ Build Release Pipeline Model For Mere Mortals, Will Anderson’s Automate Problem Solving with PowerShell, Azure Automation and OMS, and of course the session that I’m presenting, A Crash Course in Writing Your Own PSScriptAnalyzer Rules.

It’s going to be really hard to go to a “bad” session, though. With this lineup, it’s going to be impossible not to learn something valuable no matter which sessions you attend.

Hope to see you there!

Read More

Working With The PowerShell ActiveDirectory Module As A Non-Privileged User

As a best practice, as an administrator you should have separate accounts for your normal activities (email, IM, normal stuff) and your administrative activities (resetting passwords, creating new mailboxes, etc.). It’s obviously best not to log into your normal workstation as your administrative user. You’re also absolutely not supposed to remote desktop into a domain controller (or another server) just to launch a PowerShell console, import the ActiveDirectory module, and run your commands. Here’s a better way.

We’re going to leverage the $PSDefaultParameterValues built-in variable which allows you to specify default values for cmdlets every time you run them.

First, set up a variable to hold your credentials.

PS> $acred = Get-Credential -Message 'Admin creds'

Now, import the ActiveDirectory module.

PS> Import-Module ActiveDirectory

And finally, a little something special.

PS> $PSDefaultParameterValues += @{ 'activedirectory:*:Credential' = $acred }

I’m adding a value to my $PSDefaultParameterValues variable. What I’m saying is for all the cmdlets in the ActiveDirectory module, set the -Credential parameter equal to the $acred variable that I set first.

Now when I run any commands using the ActiveDirectory module, they’ll run with the administrative credentials I supplied, instead of the credentials I’m logged into the computer with.
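
If you only want that default in place temporarily, you can remove it from the dictionary when you’re done.

PS> $PSDefaultParameterValues.Remove('activedirectory:*:Credential')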

Read More

Using PowerShell To Split A String Without Losing The Character You Split On

Last week, I wrote a post on the difference between .split() and -split in PowerShell. This week, we’re going to keep splitting strings, but we’re going to try to retain the character that we’re splitting on. Whether you use .split() or -split, when you split a string, it takes that character and essentially turns it into the separation of the two items on either side of it. But, what if I want to keep that character instead of losing it to the split?

Well, we’re going to have to dabble in regular expressions. Before you run away screaming, as I know some people do when it comes to regex, let me walk you through this and see if you don’t mind dipping a toe in these waters.

In our scenario, I’ve got a filename and I’m going to split it based on the slashes in the path. Normally I’d get something like this.

PS> $filename = get-item C:\temp\demo\thing.txt
PS> $filename -split '\\'

C:
temp
demo
thing.txt

Notice how I had to split on ‘\\’? I had to escape that backslash. We’re regexing already! Also notice that I lost the backslash on which I split the string. Now let’s do a tiny bit more regex in our split pattern to retain that backslash.

PS> $filename -split '(?=\\)'

C:
\temp
\demo
\thing.txt

Look at that, we kept our backslash. How? Well, look at the pattern we split on: (?=\\). That’s what regex calls a “lookahead”. It’s contained in round brackets, and the “?=” part basically means “where the next character is…” while the “\\” still means our escaped backslash. So we’re splitting the string at each point where the next character is a backslash, and since a lookahead doesn’t consume any characters, we’re effectively splitting on the space between characters.

NEAT! Now what if I wanted the backslash to be on the other side? That is, at the end of the string on each line instead of the start of the line after? No worries, regex has you covered there, too.

PS> $filename -split '(?<=\\)'

C:\
temp\
demo\
thing.txt

This is a “lookbehind”. It’s the same as a lookahead, except it’s looking for a place where the character to the left matches the pattern, instead of the character to the right. A lookbehind is denoted with the “?<=” characters.

There are plenty of resources online about using lookaheads and lookbehinds in regex, but if you’re not looking specifically for regex resources, you probably wouldn’t have found them. If PowerShell string splitting is what you’re after, hopefully you found this interesting.

Regex isn’t that scary, right?

Read More

What's the difference between -split and .split() in PowerShell?

Here’s a question I see over and over and over again: “I have a string and I’m trying to split it on this part, but it’s jumbling it into a big mess. What’s going on?” Well, there’s splitting a string in PowerShell, and then there’s splitting a string in PowerShell. Confused? Let me explain.

Say you have this string for our example.

PS> $splitstring = 'this is an interesting string with the letters s and t all over the place'

Now let’s say you want to split it on all the “s” characters. You might do this and get these results.

PS> $splitstring.split('s')

thi
 i
 an intere
ting
tring with the letter

 and t all over the place

That did exactly what we thought it would. It took our string and broke it apart on all the “s”’s. Now, what if I want to split it where there’s an “st”? There are only two spots it should split: the “st” in “interesting” and the one in “string”. Let’s try the same thing we tried before.

PS> $splitstring.split('st')


hi
 i
 an in
ere

ing

ring wi
h
he le

er

 and
 all over
he place

Well that ain’t right. What happened? If we look closely, we can see that our string was split anywhere that there was an “s” or a “t”, rather than where there was an “st” together.

.split() is a method that takes an array of characters and then splits the string anywhere it sees any of those characters.

-split is an operator that takes a pattern string (regular expression) and splits the string anywhere it sees that pattern.

Here’s what I should have done to split our string anywhere there’s an “st”.

PS> $splitstring -split 'st'

this is an intere
ing
ring with the letters s and t all over the place

That looks more like we’re expecting.

Remember, .split() takes an array of characters, while -split takes a regular expression pattern.
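
One side effect of -split taking a regex: characters that are special in regex need escaping when you mean them literally. A quick sketch:

PS> 'one.two' -split '.'    # '.' is regex for "any character", so this splits everywhere
PS> 'one.two' -split '\.'   # escaping it splits on the literal period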

Read More

PowerShell Rules For Format-Table And Format-List

In PowerShell, when outputting data to the console, it’s typically either organized into a table or a list. You can force output to take either of these forms using the Format-Table and the Format-List cmdlets, and people who write PowerShell cmdlets and modules can take special steps to make sure their output is formatted as they desire. But, when no developer has specifically asked for a formatted output (for example, by using a .format.ps1xml file to define how an object is formatted), how does PowerShell choose to display a table or a list?

The answer is actually pretty simple and I’m going to highlight it with an example. Take a look at the following piece of code.

PS> get-wmiobject -class win32_operatingsystem | select pscomputername,caption,osarch*,registereduser

PSComputerName  caption                         OSArchitecture registereduser
--------------  -------                         -------------- --------------
workingsysadmin Microsoft Windows 10 Enterprise 64-bit         ThmsRynr@outlook.com

I used Get-WmiObject to get some information about my operating system. I selected four properties and PowerShell decided to display a table. Now, let’s add another property to return.

PS> get-wmiobject -class win32_operatingsystem | select pscomputername,caption,osarch*,registereduser,version


PSComputerName : workingsysadmin
caption        : Microsoft Windows 10 Enterprise
OSArchitecture : 64-bit
registereduser : ThmsRynr@outlook.com
version        : 10.0.14393

Whoa, now we get a list. What gives?

Well here’s how PowerShell decides, by default, whether to display a list or table:

  • If showing four or fewer properties, show a table
  • If showing five or more properties, show a list

That’s it, that’s how PowerShell decides by default whether to show you a list or table.
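
If you disagree with the default, you can always override it explicitly. For example, this forces the four-property output from above into a list.

PS> get-wmiobject -class win32_operatingsystem | select pscomputername,caption,osarch*,registereduser | Format-List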

Read More

The Difference Between Get-Member and .GetType() in PowerShell

Recently, I was helping someone in a forum who was trying to figure out what kind of object their command was returning. They knew about the standard cmdlets people suggest when you’re getting started (Get-Help, Get-Member, and Get-Command), but couldn’t figure out what was coming back from a specific command.

In order to make this a more generic example, and to simplify it, let’s approach this differently. Say I have these two objects where one is a string and the other is an array of two strings.

PS> $thing1 = 'This is an item'
PS> $thing2 = @('This is another item','This is one more item')
PS> $thing1; $thing2

The third line shows you what you get if you write these out to the screen.

This is an item
This is another item
This is one more item

It looks like three separate strings, right? Well we should be able to dissect these with Get-Member to get to the bottom of this and identify the types of objects these are. After all, one is a string and the other is an array, right?

PS> $thing1 | Get-Member


   TypeName: System.String

Name             MemberType            Definition
----             ----------            ----------
Clone            Method                System.Object Clone()
<output truncated>

So far, so good. $thing1 is our string, so we’d expect the TypeName to be System.String. Let’s check the array.

PS> $thing2 | Get-Member


   TypeName: System.String

Name             MemberType            Definition
----             ----------            ----------
Clone            Method                System.Object Clone()
<output truncated>

Dang, $thing2 is an array but Get-Member is still saying the TypeName is System.String. What’s going on?

Well, the key here is what we’re doing is writing the output of $thing2 into Get-Member. So the output of $thing2 is two strings, and that’s what’s actually hitting Get-Member. If we want to see what kind of object $thing2 really is, we need to use a method that’s built into every PowerShell object: GetType().

PS> $thing2.GetType()

IsPublic IsSerial Name                                     BaseType
-------- -------- ----                                     --------
True     True     Object[]                                 System.Array

There you go. $thing2 is a System.Array object, just like we thought.

Get-Member is useful for exploring objects’ properties and methods, as well as their types. In this case, however, it was exploring the objects that were passed to it through the pipeline, not the variable itself.
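
As an aside, if you want Get-Member to see the array itself rather than its contents, you can prefix the variable with the unary comma operator. That wraps the array in another, temporary array, so the thing that comes out of the pipeline is your original Object[].

PS> ,$thing2 | Get-Member   # TypeName: System.Object[]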

Read More

Dynamically Create Pester Tests For PowerShell

The Pester people don’t really recommend this, but, I find it can be really helpful sometimes. What I’m talking about is dynamically creating assertions inside of a Pester test using PowerShell. While I think you should strive to follow best practices, sometimes what’s best for you isn’t always a best practice, and as long as you know what you’re doing, I think you can get away with bending the rules sometimes. Don’t tell anyone I said that.

Say you had a requirement to make sure that a function you wrote performed math correctly. Maybe it looks like this.

function Get-Square {
    param (
        [int]$Number
    )
    $result = $Number * $Number
    $result
}

This will just get the square of the number we pass it. Your test might look like this.

describe 'Get-Square' {
    it 'squares 1' {
        Get-Square 1 | Should Be 1
    }

    it 'squares 2' {
        Get-Square 2 | Should Be 4
    }

    it 'squares 3' {
        Get-Square 3 | Should Be 9
    }
}

This would work. It would test your function correctly, and give you all the feedback you expect. There’s another way to do this, though. Check out this next example.

describe 'Get-Square' {
    $tests = @(
        @(1,1),
        @(2,4),
        @(3,9)
    )
    foreach ($test in $tests) {
        it "squares $($test[0])" {
            Get-Square $test[0] | Should Be $test[1]
        }
    }
}

This particular example gets more complicated, but shows you what I’m talking about. $tests is an array of smaller arrays where the first number is the number to be squared, and the second number is the answer we expect. Then for each test (array in $tests), I’m generating a new it assertion. Neat, right?

Yes, in this particular situation, we ignored Pester test cases, which would have worked here too. This was just a silly example to show how you might tackle this problem differently, or in a situation where test cases wouldn’t work for you.
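
For comparison, here’s roughly what the test cases version would look like, assuming a version of Pester recent enough to support -TestCases and name templates.

describe 'Get-Square' {
    it 'squares <Number>' -TestCases @(
        @{ Number = 1; Expected = 1 },
        @{ Number = 2; Expected = 4 },
        @{ Number = 3; Expected = 9 }
    ) {
        param ($Number, $Expected)
        Get-Square $Number | Should Be $Expected
    }
}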

Read More

Piping PowerShell Output Into Bash

With Windows 10, you can install Bash on Windows. Cool, right? Having Bash on Windows goes a long way towards making Windows a more developer-friendly environment and opens a ton of doors. The one I’m going to show you today is more of a novelty than anything else, but maybe you’ll find something neat to do with it.

If you’ve been around PowerShell, you’re used to seeing the pipe character ( | ) used to pass the output from one command into the input of another. What you can do now, kind of, is pass the output of a PowerShell command into the input of a Bash command. Here’s an example. Get ready for this biz.

Get-ChildItem c:\temp\demo | foreach-object { bash -c "echo $($_.Name) | awk /\.csv/" }

In my c:\temp\demo folder, I have three files, two of which are CSVs. In an attempt to be super inefficient, I am piping the files in that directory into a foreach-object loop and using Bash to tell me which ones end in .csv, using awk. This is hardly the best way to do this, but it gives you an idea of how you can start to intermingle these two shells.

Read More

How To List All The Shares On A Server Using PowerShell

There are a few ways to get all of the shared folders on a server, but not all of them work for all versions of Windows Server. You can use the Get-SmbShare cmdlet, or you can make CIM/WMI do the work for you. I’ll show you what I prefer, though.

To use Get-SmbShare on a remote computer, you’ll create a new CIM session.

PS> New-CimSession -ComputerName $computername -Credential $creds

Id           : 1
Name         : CimSession1
InstanceId   : 110928f2
ComputerName : computername
Protocol     : WSMAN

Then you can pass that CIM session to Get-SmbShare.

PS> Get-SmbShare -CimSession $(get-cimsession -id 1)

Name     ScopeName Path                              Description     PSComputerName
----     --------- ----                              -----------     --------------
ADMIN$   *         C:\windows                        Remote Admin    comp
C$       *         C:\                               Default share   comp
D$       *         D:\                               Default share   comp
IPC$     *                                           Remote IPC      comp
print$   *         C:\windows\system32\spool\drivers Printer Drivers comp
Profiles *         D:\Profiles                                       comp
Transfer *         C:\Shares\Transfer                                comp

But what if the server is (heaven forbid!) older than Windows Server 2012 R2? Well, you’d get an error telling you “get-cimclass : The WS-Management service cannot process the request. The CIM namespace win32_share is invalid.” That won’t do.

Well, luckily for those older servers, you can use Get-WmiObject to retrieve this information.

PS> Get-WmiObject -Class win32_share -ComputerName $oldComp -Credential $creds


Name         Path                                                  Description
----         ----                                                  -----------
ADMIN$       C:\windows                                            Remote Admin
C$           C:\                                                   Default share
D$           D:\                                                   Default share
IPC$                                                               Remote IPC


Read More

Get A ServiceNow User Using PowerShell

ServiceNow is a cloud computing company whose software is used for IT Service Management based on ITIL standards. They’ve got a bunch of different modules for managing problems and incidents, operations management, performance analytics, and more. There’s also some custom development you can do to modify their solutions or build your own. It’s pretty flexible, and we use it where I work.

I have been working with the ServiceNow API a lot lately. This week, I’m going to show you something pretty simple: Getting a ServiceNow user.

Let’s jump into some code first and I’ll break down what I’m doing.

$Credential = Get-Credential
$SubscriptionSubDomain = 'mysub'
$Username = 'thmsrynr'

$user = $Credential.Username
$pass = $Credential.GetNetworkCredential().Password
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $pass)))

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add('Authorization',('Basic {0}' -f $base64AuthInfo))
$headers.Add('Accept','application/json')

$uri = "https://$SubscriptionSubDomain.service-now.com/api/now/v1/table/sys_user?sysparm_query=user_name=$Username"

$response = Invoke-WebRequest -Headers $headers -Method "GET" -Uri $uri 
$result = ($response.Content | ConvertFrom-Json).Result

First, I’m compiling the authentication information and header info as per the ServiceNow documentation. This isn’t my favorite way of handling credentials, but it’s what the ServiceNow documentation recommends and, well, it works.

Next, I’m constructing my URI using a variable holding my subdomain and another variable for the username I’m interested in ($SubscriptionSubDomain and $Username respectively).

Then, I am invoking the web request to get the information about the user, and parsing the result. I can then use the $result variable later in my script.

This has been particularly helpful for me when I’m trying to figure out the sys_id (ServiceNow’s unique ID) for a specific user and all I know is their username.

Read More

Getting Started With Azure Automation (Pluralsight Course)

I try my best to make new technical posts on this blog every Wednesday morning. They vary in length, skill level, and sometimes even usefulness. Today I wanted to share that my first Pluralsight course was published last week: Getting Started with Azure Automation.


Pluralsight is a paid service but trials are available, and it’s a benefit of having an MSDN subscription. They’ve got thousands of hours of good stuff for people working in all areas of technology, including my new course.


My Getting Started with Azure Automation course will take you from zero knowledge to functionally useful in just over an hour. Please check it out and don’t hesitate to contact me with any questions or feedback.


As a Pluralsight author, I am compensated for creating courses so this is technically a sponsored post. I do, however, truly believe in their service overall, and think many people who read my blog may benefit from watching my course.

Read More

Quick Tip - Use PowerShell To See How Many Files Are In A Directory

Here’s a way to see how many files are in a directory, using PowerShell.

As you likely know, you can use Get-ChildItem to get all the items in a directory. Did you know, however, that you can have PowerShell quickly count how many files there are?

PS> (Get-ChildItem -Path c:\temp\demo).count
3

I probably could have counted the files in this specific directory pretty easily myself, since there are only 3 of them. If you want to see how many files are in an entire folder structure, use the -Recurse flag to go deeper, as shown below.
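
Something like this counts every file in the tree; the -File switch (available in PowerShell 3.0 and up) keeps directories out of the count.

PS> (Get-ChildItem -Path c:\temp\demo -Recurse -File).count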

You can do this with any output from a cmdlet when it’s returned in an array of objects. Check this out.

PS> (Get-AdUser -filter "Name -like 'Thomas *'").count
7

In my test Active Directory, there are 7 AD users with a name that matches the pattern “Thomas *”.

Read More

Add A Column To A CSV Using PowerShell

Say you have a CSV file full of awesome, super great, amazing information. It’s perfect, except it’s missing a column. Luckily, you can use Select-Object along with the other CSV cmdlets to add a column.

In our example, let’s say that you have a CSV with two columns, “Name” and “IPAddress”, and you want to add a column for “Port3389Open” to see if the port for RDP is open or not. It’s only a few lines of code from being done.

PS> $servers = Import-Csv C:\Temp\demo\servers.csv

PS> $servers

Name     IPAddress
----     ---------
server01 10.1.2.10
server02 10.1.2.11

Now, let’s borrow some code from my post on calculated properties in PowerShell to help us add this column and my post on seeing if a port is open using PowerShell to populate the data.

PS> $servers = $servers | Select-Object -Property *, @{label = 'Port3389Open'; expression = {(Test-NetConnection -ComputerName $_.Name -Port 3389).TcpTestSucceeded}}

You can run $servers to see if the new data shows up correctly (spoiler alert, it did), and then use Export-Csv to put the data into the same, or a new, CSV file.

PS> $servers | Export-Csv -Path c:\temp\demo\servers-and-port-data.csv -NoTypeInformation

Use the -Force flag if you’re overwriting an existing CSV.

Read More

Quick Tip - Diagnosing Slow PowerShell Load Times

I could write an entire book on “why does my PowerShell console take so long to load?” but I don’t want to write that book. Instead, here’s a way to make sure the reason your console is loading slowly isn’t because of something dumb.

When you launch PowerShell, one of the things that happens is that your profile is loaded. Your profile is basically its own script that runs to set up and configure your environment before you start using it. I use mine to define some custom aliases and functions, import some modules, and set my prompt up. You can see what your profile is doing by running notepad $profile. This will open your profile in Notepad (but you can use the ISE or Visual Studio Code or Notepad++ etc. if you prefer).

There is more than one profile used by PowerShell depending on how you’re running PowerShell, and $profile will always refer to the one that’s currently applied to you. If you run the command above and are told that there’s no such file, it means you don’t have anything configured in your PowerShell profile.
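
If you want a rough number for how long your profile takes, you can time it with Measure-Command. Dot sourcing your profile again in an already-loaded session isn’t a perfect measurement, but it gives you a ballpark.

PS> Measure-Command { . $profile }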

Keep in mind, there could be a lot of other reasons that your console loads slowly. This is just a quick way to clear out any dumb code from your profile.

Read More

Use Test-NetConnection in PowerShell to See If A Port Is Open

The days of using ping.exe to see if a host is up or down are over. Your network probably shouldn’t allow ICMP to just fly around unchecked, and your hosts probably shouldn’t answer ICMP echo requests (pings) either. So how do I know if a host is up or not?

Well, it involves knowing about what your host actually does. What ports are supposed to be open? Once you know that, you can use Test-NetConnection in PowerShell to check if the port is open and responding on the host you’re interested in.

PS> Test-NetConnection -ComputerName $computerName -Port 3389


ComputerName     : <snip - name of the computer I'm testing>
RemoteAddress    : <snip - IP address of the computer I'm testing>
RemotePort       : 3389
InterfaceAlias   : Ethernet
SourceAddress    : <snip - my IP address>
TcpTestSucceeded : True

Here I just checked if port 3389 (for RDP) is open or not. Looks like it is.

Read More

Use PowerShell To Find Out How Long It Is Until Christmas

It’s July at the time of this post, which means Christmas is right around the corner! Maybe not. How long is it until Christmas, anyway? Well, PowerShell can tell us if we get the date of Christmas and subtract today’s date from it.

PS> (Get-Date 'December 25') - (Get-Date)


Days              : 202
Hours             : 15
Minutes           : 5
Seconds           : 13
Milliseconds      : 808
Ticks             : 175071138085639
TotalDays         : 202.628632043564
TotalHours        : 4863.08716904553
TotalMinutes      : 291785.230142732
TotalSeconds      : 17507113.8085639
TotalMilliseconds : 17507113808.5639

Only 17507113808.5639 more milliseconds until Christmas!

Read More

Calculated Properties in PowerShell

Most of the time, a PowerShell cmdlet will return all the information you need to work with it later in the pipeline. Sometimes, though, there’s some assembly required. What I mean is, maybe the cmdlet returned the information you need, but not in the format you want, or you wish you had some property multiplied by some other property. Let’s explore.

Say you ran Get-ChildItem to get some items in a directory, and you get something like the following.

PS> Get-ChildItem c:\temp\demo


    Directory: C:\temp\demo


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----         6/5/2017   8:40 AM        1519200 thing.txt

One of the items is Length, which tells you the size of the file in bytes. What if I wanted that in kilobytes, though? Well, it’s not too hard. We’re going to use a calculated property, using Select-Object.

PS> Get-ChildItem c:\temp\demo | Select-Object -Property Name, @{label = 'FileSize'; expression = { $_.Length/1KB }}

Name        FileSize
----        --------
thing.txt 1483.59375

So I’m selecting two properties. One is Name, and the other is a calculated property. A calculated property is basically a hashtable with two items in it: a label, which is the name of our calculated property, and expression, which is the scriptblock that defines our calculation.

In this case, the name of my calculated property is FileSize, and the calculation is “the length of the item, divided by 1KB”. In these calculations, $_ basically refers to “the item we’re looking at”.
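
The expression scriptblock can contain whatever logic you like, too. For instance, here’s a small variation on the above that rounds the result to two decimal places.

PS> Get-ChildItem c:\temp\demo | Select-Object -Property Name, @{label = 'FileSize'; expression = { [math]::Round($_.Length/1KB, 2) }}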

Read More

Using Get-Member to Explore Objects

Last week, I put out a post about using Select-Object to explore PowerShell objects. This week, I am going to quickly cover using Get-Member to do the same.

Let’s say you’re using Get-CimInstance to get information about the operating system. You might do something like this.

PS> Get-CimInstance -ClassName win32_operatingsystem

SystemDirectory     Organization          BuildNumber RegisteredUser        SerialNumber            Version
---------------     ------------          ----------- --------------        ------------            -------
C:\windows\system32 <snip>                14393       <snip>                <snip>                  10.0.14393

As is the case with our example last week, there’s more stuff returned and available to us than what is returned by default. Let’s use Get-Member to see what it all is.

PS> Get-CimInstance -ClassName win32_operatingsystem | get-member


   TypeName: Microsoft.Management.Infrastructure.CimInstance#root/cimv2/Win32_OperatingSystem

Name                                      MemberType  Definition
----                                      ----------  ----------
Clone                                     Method      System.Object ICloneable.Clone()
Dispose                                   Method      void Dispose(), void IDisposable.Dispose()
Equals                                    Method      bool Equals(System.Object obj)
GetCimSessionComputerName                 Method      string GetCimSessionComputerName()
GetCimSessionInstanceId                   Method      guid GetCimSessionInstanceId()
GetHashCode                               Method      int GetHashCode()
GetObjectData                             Method      void GetObjectData(System.Runtime.Serialization.SerializationInfo info, System.Runtime.Serialization.StreamingCon...
GetType                                   Method      type GetType()
ToString                                  Method      string ToString()
BootDevice                                Property    string BootDevice {get;}
BuildNumber                               Property    string BuildNumber {get;}
BuildType                                 Property    string BuildType {get;}
Caption                                   Property    string Caption {get;}
CodeSet                                   Property    string CodeSet {get;}
CountryCode                               Property    string CountryCode {get;}
CreationClassName                         Property    string CreationClassName {get;}
CSCreationClassName                       Property    string CSCreationClassName {get;}
CSDVersion                                Property    string CSDVersion {get;}
CSName                                    Property    string CSName {get;}
CurrentTimeZone                           Property    int16 CurrentTimeZone {get;}
DataExecutionPrevention_32BitApplications Property    bool DataExecutionPrevention_32BitApplications {get;}
DataExecutionPrevention_Available         Property    bool DataExecutionPrevention_Available {get;}
DataExecutionPrevention_Drivers           Property    bool DataExecutionPrevention_Drivers {get;}
DataExecutionPrevention_SupportPolicy     Property    byte DataExecutionPrevention_SupportPolicy {get;}
Debug                                     Property    bool Debug {get;}
Description                               Property    string Description {get;set;}
Distributed                               Property    bool Distributed {get;}
<output truncated>

Holy smokes, there’s a lot of stuff there. As with Select-Object, you can see all the different properties that exist in this object. The big difference here is that you can see all the different methods this object comes with, too. You could store this information in a variable and then invoke the .GetHashCode() method on it and see the output of that, like this.

PS> $osInfo = Get-CimInstance -ClassName win32_operatingsystem

PS> $osInfo.GetHashCode()
57422975

There’s a lot of examples of methods that are more interesting than this, but you can play with it and make this work for you.


Read More

Using Select-Object to Explore Objects

When you’re first getting started with PowerShell, you may not be aware that sometimes when you run a command to get data, the information returned to the screen is not ALL the information that the command actually returned.

Let me clarify with an example. If you run the Get-ChildItem cmdlet, you’ll get a bit of information back about all the files in whichever directory you specified.

PS> Get-ChildItem c:\temp\demo


    Directory: C:\temp\demo


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----         6/5/2017   8:11 AM              0 thing.txt

This is not all the data that got returned, though. There are far more properties than just Mode, LastWriteTime, Length and Name to be examined. What are they? Well, we can pipe this cmdlet into Select-Object -Property * to see them.

PS> Get-ChildItem c:\temp\demo | Select-Object -Property *


PSPath            : Microsoft.PowerShell.Core\FileSystem::C:\temp\demo\thing.txt
PSParentPath      : Microsoft.PowerShell.Core\FileSystem::C:\temp\demo
PSChildName       : thing.txt
PSDrive           : C
PSProvider        : Microsoft.PowerShell.Core\FileSystem
PSIsContainer     : False
Mode              : -a----
VersionInfo       : File:             C:\temp\demo\thing.txt
                    InternalName:
                    OriginalFilename:
                    FileVersion:
                    FileDescription:
                    Product:
                    ProductVersion:
                    Debug:            False
                    Patched:          False
                    PreRelease:       False
                    PrivateBuild:     False
                    SpecialBuild:     False
                    Language:

BaseName          : thing
Target            : {}
LinkType          :
Name              : thing.txt
Length            : 0
DirectoryName     : C:\temp\demo
Directory         : C:\temp\demo
IsReadOnly        : False
Exists            : True
FullName          : C:\temp\demo\thing.txt
Extension         : .txt
CreationTime      : 6/5/2017 8:11:04 AM
CreationTimeUtc   : 6/5/2017 2:11:04 PM
LastAccessTime    : 6/5/2017 8:11:04 AM
LastAccessTimeUtc : 6/5/2017 2:11:04 PM
LastWriteTime     : 6/5/2017 8:11:04 AM
LastWriteTimeUtc  : 6/5/2017 2:11:04 PM
Attributes        : Archive

Look at all that goodness. You can select specific properties by replacing the star with the names of the properties you want to see.

PS> Get-ChildItem c:\temp\demo | Select-Object -Property Name, Attributes, IsReadOnly

Name      Attributes IsReadOnly
----      ---------- ----------
thing.txt    Archive      False

Happy scripting!

Read More

Can PowerShell Parameters Belong To Multiple Parameter Sets?

Say you’ve got a function that takes three parameters: Username, ComputerName and SessionName, but you don’t want someone to use ComputerName and SessionName at once. You decide to put them in separate parameter sets. Awesome, except you want Username to be a part of both parameter sets and it doesn’t look like you can specify more than one.

This will generate an error:

function Do-Thing {
    [CmdletBinding()]
    param (
    [Parameter( ParameterSetName = 'Computer','Session' )][string]$Username,
    [Parameter( ParameterSetName = 'Computer' )][string]$ComputerName,
    [Parameter( ParameterSetName = 'Session' )][PSSession]$SessionName
    )
# Other code
}

So how do you make a parameter a member of more than one parameter set? You need more [Parameter()] qualifiers.

function Do-Thing {
    [CmdletBinding()]
    param (
    [Parameter( ParameterSetName = 'Computer' )]
    [Parameter( ParameterSetname = 'Session' )]
    [string]$Username,

    [Parameter( ParameterSetName = 'Computer' )][string]$ComputerName,
    [Parameter( ParameterSetName = 'Session' )][PSSession]$SessionName
    )
# Other code
}

They chain together, and now $Username is a part of both parameter sets.
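
You can confirm the sets came out the way you intended with Get-Command’s -Syntax parameter, which prints one line per parameter set. You should see something like this.

PS> Get-Command Do-Thing -Syntax

Do-Thing [-Username <string>] [-ComputerName <string>] [<CommonParameters>]
Do-Thing [-Username <string>] [-SessionName <PSSession>] [<CommonParameters>]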

Read More

Connecting to Exchange Online Using Multi-Factor Authentication via PowerShell

Using PowerShell to manage your Microsoft cloud services like Exchange Online is awesome. Using multi-factor authentication (MFA) is also awesome. For some reason, using the two together is not awesome. Many of the Microsoft docs on this seem to suggest you just perform all your administrative tasks from a shell that you launch entirely separately from a normal PowerShell console. I would rather be able to connect to Exchange Online using MFA via PowerShell through a normal console, or as part of another tool. Let me show you how.

The first thing you’ll need to do is install the tool at this page. This will give you all the tools and libraries you need to install to connect to Exchange Online using MFA via PowerShell, including that special, magic console. Now that you have the tools installed, you can use this snippet to connect from a normal PowerShell console or from within another PowerShell-based tool.

$modules = @(Get-ChildItem -Path "$($env:LOCALAPPDATA)\Apps\2.0" -Filter "Microsoft.Exchange.Management.ExoPowershellModule.manifest" -Recurse )
$moduleName =  Join-Path $modules[0].Directory.FullName "Microsoft.Exchange.Management.ExoPowershellModule.dll"
Import-Module -FullyQualifiedName $moduleName -Force
$scriptName =  Join-Path $modules[0].Directory.FullName "CreateExoPSSession.ps1"
. $scriptName
$null = Connect-EXOPSSession
$exchangeOnlineSession = (Get-PSSession | Where-Object { ($_.ConfigurationName -eq 'Microsoft.Exchange') -and ($_.State -eq 'Opened') })[0]

On lines 1 and 2, I’m getting the location of the different tools and libraries that we installed earlier. Once I find the ExoPowerShellModule.dll, I can import it like any other module, except I’m specifying the full path, on line 3.

Lines 4 and 5 are where I find and dot source CreateExoPSSession.ps1 which is the script that contains the Connect-EXOPSSession cmdlet (which I’d be remiss if I didn’t mention violates the PowerShell naming standards created by the community and advertised by Microsoft). That cmdlet will trigger a login process that includes MFA, similar to how Login-AzureRmAccount works.

Finally on lines 6 and 7, I’m creating a new session and then assigning it to a variable called $exchangeOnlineSession. Then I can import that session and I’ll be away to the races.

It’s not as convenient or straightforward as connecting without MFA, but it’s definitely safer.

Read More

Custom PSScriptAnalyzerRule - Function Capitalization

I’ve got a number of custom PSScriptAnalyzer rules that I sometimes run. A little while ago I uploaded them to GitHub to share with others. Today I’m going to walk you through the AvoidImproperlyCapitalizedFunctionNames rule I wrote.

I wrote documentation (and tests) for these with the intention of someday making a pull request to PSSA to add these rules, but PSSA is not currently set up to include script-based rules. Here’s the description of my rule, from the documentation I wrote.

According to the PowerShell Practice and Style guide's section on capitalization conventions (community developed, but references Microsoft's published document on the same for the .NET framework), it is best practice to use PascalCase for functions inside of modules and scripts. This means that one should never see adjacent capital letters in a function name. An exception exists for two letter acronyms only, like "VM" for Virtual Machine or "PS" for PowerShell (ex: Get-PSDrive). This should not extend to compound acronyms, like Azure Resource Manager's "RM" meeting Virtual Machine's "VM" in Start-AzureRmVM. Accordingly, this rule warns on instances where four or more adjacent capital letters are found in a function name.

With this description, it’s no surprise that my rule will identify functions with 4 or more adjacent capital letters. It’s not a perfect solution, because people could still implement things in all lowercase, or with other improperly capitalized names, but it’s a good start.

function Test-FunctionCasing {
    [CmdletBinding()]
    [OutputType([PSCustomObject[]])]
    param (
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [System.Management.Automation.Language.ScriptBlockAst]$ScriptBlockAst
    )

    process {
        try {
            $functions = $ScriptBlockAst.FindAll( { $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] -and  
                $args[0].Name -cmatch '[A-Z]{4,}' }, $true )
            foreach ( $function in $functions )
            {
                [PSCustomObject]@{
                    Message  = 'Avoid function names with more than 3 capital letters in a row in their name'
                    Extent   = $function.Extent
                    RuleName = 'PSAvoidImproperlyCapitalizedFunctionNames'
                    Severity = 'Warning'
                }
            }
        }
        catch {
            $PSCmdlet.ThrowTerminatingError( $_ )
        }
    }
}

Custom PSSA rules need to take some sort of AST object, and mine takes a ScriptBlockAst so it can go through all the declared functions in that AST. Line 12 will get all the function definitions with names that have 4 or more adjacent capital letters. For each of those, I return a PSSA warning about violating the naming convention.
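
To actually run a script-based rule like this, save it in a .psm1 file and point Invoke-ScriptAnalyzer at it with the -CustomRulePath parameter. The file and script names here are hypothetical.

Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -CustomRulePath .\MyCustomRules.psm1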

Read More

My Demonstration Prompt

Recently, I have found myself doing a lot of CLI PowerShell demos. Normally, I have a prompt that uses Joel Bennett’s PowerLine module and looks like this.

In my opinion, it’s pretty cool looking, and it gives me a bunch of useful information including the Get-History ID of the line that I ran, the nested prompt level, current drive, the present working directory, the time the last command took to run and whether it was successful, and the current time.

This is way too much information for a regular demo, and ends up in me answering questions about my prompt and explaining it for 5 minutes which eats up valuable demo time. Here’s my new demo prompt.

Not much explaining to do here. Here’s the code I have in my profile to make it happen.

function Invoke-DemoPrompt {
    $demo = 'function prompt {"I $([char]0x1b)[0;31m$([char]9829) $([char]0x1b)[0;0mPS> "}; clear-host'
    Invoke-Expression $demo
}

Shout out to Joel and Brandon in the PowerShell Slack for working this one out. It’s pretty simple: $demo is a string that redefines my prompt function, and then it’s invoked.

Read More

How To Retrieve A Certificate From Azure Key Vault Via PowerShell

So, you’ve got a certificate stored in Azure Key Vault that you want to download with PowerShell and use on a computer, or some hosted service. How do you get it and actually use it? Well, here, I’ll show you.

First, you’ve got to have the Azure PowerShell tools installed and be logged into Azure (or be running in a way where you’re already authenticated, like in Azure Automation).

Install-Module -Name AzureRm -Repository PSGallery -Scope CurrentUser -Force
Import-Module AzureRm
Login-AzureRmAccount

Next, it’s time to download the certificate. There are some Azure Key Vault cmdlets built in which, helpfully, do not follow the standard AzureRm naming scheme.

$cert = Get-AzureKeyVaultSecret -VaultName 'My-Vault' -Name 'My-Cert'

Now, we have to convert the SecretValueText property to a certificate.

$certBytes = [System.Convert]::FromBase64String($cert.SecretValueText)
$certCollection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$certCollection.Import($certBytes,$null,[System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable)

We can convert the SecretValueText to bytes, and use the X509Certificate2Collection class to convert those bytes to a certificate.

Next, we want to write the certificate to a pfx file on a disk somewhere (preferably to a temp location you can clean up later in the script).

$protectedCertificateBytes = $certCollection.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $password)
$pfxPath = "D:\a\1\temp\ThomasRayner-export.pfx"
[System.IO.File]::WriteAllBytes($pfxPath, $protectedCertificateBytes)

The first line here exports the certificate collection and protects it with a password, but where did that password come from?! The last line then writes the protected bytes to the path on the file system set on line 2.

So where did that password come from? I’m actually storing that in the Azure Key Vault, too.

$password = (Get-AzureKeyVaultSecret -VaultName 'My-Vault' -Name 'My-PW').SecretValueText
$secure = ConvertTo-SecureString -String $password -AsPlainText -Force

Now, I can either refer to that pfx file, or I can import it like this.

Import-PfxCertificate -FilePath "D:\a\1\temp\ThomasRayner-export.pfx" -CertStoreLocation Cert:\CurrentUser\My -Password $secure

Make sure you clean up your certs after you’re done!

Read More

Quick Tip - Using Variables In ActiveDirectory Filters

If you work with the ActiveDirectory PowerShell module, you’ve probably used the -filter parameter to search for accounts or objects in Active Directory. You’ve probably wanted to use variables in those filters, too.

Say you have a command from something like a remote Exchange management shell that returned an object which includes a username (called Alias in this example).

$person = (Get-Mailbox ThmsRynr).Alias

And let’s use that in an ActiveDirectory command. Ignoring the fact that you could find the account that has this username without using a filter, let’s see how you would use it in a filter.

You might try this.

Get-AdUser -Filter "SamAccountName -eq $person"

But you’d get errors.

Get-AdUser : Error parsing query: 'SamAccountName -eq ThmsRynr' Error Message: 'syntax error' at position: '20'.
At line:1 char:1
+ Get-AdUser -Filter "SamAccountName -eq $person"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ParserError: (:) [Get-ADUser], ADFilterParsingException
    + FullyQualifiedErrorId : ActiveDirectoryCmdlet:Microsoft.ActiveDirectory.Management.ADFilterParsingException,Microsoft.ActiveDirectory.Management.Commands.GetADUser

That’s because the filter can’t handle your variable that way. To use a variable in an ActiveDirectory cmdlet filter, you need to wrap the filter in curly braces.

Get-AdUser -Filter {SamAccountName -eq $person}

And you get your results!

DistinguishedName : CN=Thomas Rayner,OU=Users,DC=lab,DC=workingsysadmin,DC=com
Enabled           : True
GivenName         : Thomas
Name              : Thomas Rayner
ObjectClass       : user
ObjectGUID        : <snip>
SamAccountName    : TFRayner
SID               : <snip>
Surname           : Rayner
UserPrincipalName : ThmsRynr@outlook.com

Pretty easy fix for a pretty silly issue.

Read More

Find Users Who Are Allowed To Have No Password Using PowerShell

You can use the UserAccountControl property of an Active Directory user object to enable and disable all kinds of neat functionality: https://support.microsoft.com/en-ca/kb/305144. One of the things you can enable is for a user to have no password (bit in the 32 position).

While this only impacts users who connect to the console, and it doesn’t mean that a user doesn’t have a password (just that they might not have one), it’s pretty bad to leave that enabled for any users you’ve got.

Here’s an easy one-liner to get a list of users with this problem.

get-aduser -filter "useraccountcontrol -band 32" -properties useraccountcontrol

This shows you all the users in your domain whose password not required flag is set.

Here’s an easy way to fix it indiscriminately! Pipe the last command into…

 | foreach-object { Set-ADAccountControl $_.samaccountname -PasswordNotRequired $false }


Read More

How To Tell If The Verbose Parameter Of A Function Is From [CmdletBinding()] Or Manually Added

Pardon the long title. I had a task recently to go through a big folder full of scripts written by random people with equally random skill levels. Lots of the scripts had a -Verbose parameter, but they weren’t all done correctly.

Some scripts correctly included the [CmdletBinding()] line above the param() block. Some just had a [Switch]$Verbose parameter (wrong). Others had both (double wrong, script won’t even run).

Consider the following three functions, which illustrate the three categories I was dealing with.

function do-verbose1 { param([switch]$Verbose) @{} + $psboundparameters }                  
function do-verbose2 { [cmdletbinding()] param() @{} + $psboundparameters }                
function do-verbose3 { [cmdletbinding()] param([switch]$Verbose) @{} + $psboundparameters }

The first one is bad, the second one is good, the third one is double bad.

Here’s a quick way you can check the scripts using PowerShell so you don’t have to open them all up.

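The gist of the check is to ask Get-Help about the Verbose parameter of each function, something like this (a sketch; the exact behavior of Get-Help varies a little between PowerShell versions).

PS> Get-Help do-verbose1 -Parameter Verbose
PS> Get-Help do-verbose2 -Parameter Verbose
PS> Get-Help do-verbose3 -Parameter Verbose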

As you can see, the first one has a parameter of -Verbose so Get-Help will show you info about it. The second one returns nothing. The third one returns an error.

For checking real scripts that have other parameters, you should pipe the results into Where-Object { $_.Name -eq 'Verbose' } to eliminate other parameters from being returned.

Read More

Honorary Scripting Guy Award

Yesterday, Microsoft’s Ed Wilson announced the Honorary Scripting Guys for 2016. I am honored and very proud to be the newest Honorary Scripting Guy, joining this year’s repeat winners: Sean Kearney, Teresa Wilson, and Will Anderson.

The Hey, Scripting Guy! blog is a resource that was an enormous part of my self-learning journey when I first got started with PowerShell, just as I am sure it was for you. Just having the opportunity to write posts and share information on HSG is a huge privilege. I still find it to be a surreal experience every time I see my content go up. My HSG posts are tagged with my name, in case you want to check them out.

Earlier this month, Ed and his wife, Teresa, announced their upcoming retirement in March. I’d like to thank them both so much for their immeasurable, phenomenal contributions to PowerShell and the community. Ed and Teresa, we are going to miss you both tremendously. I hope retirement treats you both excellently, as you more than well deserve.

Read More

Invoking Pester and PSScriptAnalyzer Tests in Hosted VSTS

Overview

Pester and PSScriptAnalyzer are both fundamental tools for testing the effectiveness and correctness of PowerShell scripts, modules, and other PowerShell artifacts. While it is relatively convenient and straightforward to run these tools on a local development workstation, and even on owned/on-prem testing servers, it is somewhat more complicated to execute these tests in your own Microsoft-hosted Visual Studio Team Services environment.

Pester is an open source domain-specific language for PowerShell unit testing, developed originally by Dave Wyatt, which enjoys contributions from a variety of prominent members of the PowerShell community, as well as Microsoft employees on the PowerShell product team. Microsoft is a big enough fan of Pester that it comes with Windows 10, and they reference it frequently in talks and written material.

PSScriptAnalyzer is a static code checker for PowerShell modules and scripts that checks the quality of code by comparing it against a set of rules. The rules are based on PowerShell best practices identified by the PowerShell team at Microsoft and the community. It is shipped with a collection of built-in rules but supports the ability to include or exclude specific rules, and also supports custom rule definitions. PSScriptAnalyzer is an open source project developed originally by the PowerShell team at Microsoft.

Lots of DevOps teams use the above tools together, along with their internally generated standards and style guide, to ensure that PowerShell code that is released into any environment meets their standards. By using Pester to ensure that a piece of code performs the tasks required, using PSScriptAnalyzer to inspect code for general best practice violations, and using a peer review process to validate that code conforms to your standards and style guidelines, you can rigorously test and ensure the quality and functionality of all PowerShell code that you produce.

As part of a PowerShell Release Pipeline, you may store your code in the source control portion of VSTS, hosted by Microsoft. I’d suggest you use the automated build and release components of VSTS to execute a series of tasks before deploying, and to deploy PowerShell code. Two of these tasks are running Pester tests and PSScriptAnalyzer. As a standard, don’t release builds if any part of either of these two tests fail.

In previous versions of VSTS, the hosted build service ran PowerShell 4.x. Because installing modules from the PowerShell Gallery (to get Pester and PSScriptAnalyzer module files so they may be run) requires PowerShell 5.0 or higher, it was necessary to use a third-party configured build step or perform some other hijinks that possibly compromised the integrity of a build. Now that VSTS runs PowerShell 5.0, we can run Pester, PSScriptAnalyzer, and many other helpful modules without exporting them with our other code, or using third-party build steps.

Prerequisites

Before following the steps in this guideline, there are several prerequisites and assumptions regarding access, knowledge, and already produced artifacts.

  1. Access to VSTS and permission to create builds in whichever area you are working in
  2. Know how to use Pester and PSScriptAnalyzer on local workstation
  3. Have produced Pester tests, and if applicable, custom PSScriptAnalyzer rule files
  4. Have a PowerShell script or module to test

Preparing Artifacts

PowerShell scripts and modules should be stored in a separate location from the tests that are run to validate their functionality. Typically, within a root folder, you will have a ModuleName (or ScriptName) folder which contains all of the .ps1, .psm1, .psd1, .dll, etc. files and artifacts that must be deployed for the code to be functional. Since tests are not deployed along with the rest of the functional artifacts, they should be stored in a Tests folder at the root, at the same level as the ModuleName folder.
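For a hypothetical module named MyModule, that layout looks something like this.

MyModule          (root folder)
├── MyModule
│   ├── MyModule.psd1
│   └── MyModule.psm1
└── Tests
    ├── MyModule.tests.ps1
    └── PSSA.tests.ps1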

All of the above should be saved in VSTS source control.

The execution script (Invoke-Test.ps1)

It is assumed, above, that you have written Pester tests for your code, that the Pester tests have gone through peer review, and were determined to be complete and thorough. It is also assumed that they are stored as described in the standards and style guideline document.

Although the test results from Pester and PSScriptAnalyzer are separate and independent, because of the overhead of loading all of the required modules, it makes more sense to use one script to load the required modules and call all of the different tests from it.

The script inside Invoke-Test.ps1 should look something like this.

$ErrorActionPreference = 'stop'
Install-PackageProvider -Name Nuget -Scope CurrentUser -Force -Confirm:$false
Install-Module -Name Pester -Scope CurrentUser -Force -Confirm:$false
Install-Module -Name PSScriptAnalyzer -Scope CurrentUser -Force -Confirm:$false
Import-Module Pester
Import-Module PSScriptAnalyzer
Invoke-Pester -OutputFile 'PesterResults.xml' -OutputFormat 'NUnitXml' -Script '.\Tests\Set-E.tests.ps1'
Invoke-Pester -OutputFile 'PSSAResults.xml' -OutputFormat 'NUnitXml' -Script '.\Tests\PSSA.tests.ps1'

Broken down line by line, the script performs the following tasks.

  1. Sets the ErrorActionPreference to stop. In this instance, we want any error thrown to be a terminating error and setting the ErrorActionPreference is the most convenient way to achieve this (it isn’t 100% effective, but works for this purpose). Normally, we wouldn’t want to perform such a significant change to a user’s working environment, but VSTS testing environments are ephemeral and therefore won’t be around after we’re done testing to experience any consequences.
  2. Installs Nuget as a package provider. The following two lines install modules from the PowerShell Gallery (powershellgallery.com) and the first time you try to do that, PowerShell will prompt you to accept the installation of Nuget. In the non-interactive hosted VSTS test environment this is not possible and so line number 2 performs this task proactively.
  3. Installs Pester from the PowerShell Gallery. This has to be for the scope CurrentUser because the test user running the code is not an administrative one.
  4. Installs PSScriptAnalyzer from the PowerShell Gallery. This also needs to be for the CurrentUser scope for the same reason as line 3.
  5. Imports the Pester module into the test user’s session.
  6. Imports the PSScriptAnalyzer module into the test user’s session.
  7. Runs the Pester test associated with this script or module and outputs the results in an XML document formatted as NUnitXML so VSTS can pick it up later.

You may have several different Pester tests for a script or module, so you may want to add some logic to this line to loop through a bunch of different tests, or simply repeat the same line more than once (there’s a sketch of the looping approach at the end of this section). It is important that you generate a unique XML file for each test you run to avoid overwriting the results of one test with another.

  8. Runs the PSScriptAnalyzer test. This looks like another Pester test, because it technically is. We use Pester as sort of a wrapper around a PSScriptAnalyzer test.

By default, PSScriptAnalyzer will dump its results to a user’s console window, because it’s made primarily to be an interactive tool. There are different solutions for using PSScriptAnalyzer as an unattended solution, and this is one of them.
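As mentioned above, you may have more than one Pester test file. A minimal sketch of looping through all of them, assuming they live in the Tests folder and you want one uniquely named XML result file per test, might look like this.

# run every Pester test file in the Tests folder, one result file each
Get-ChildItem -Path '.\Tests' -Filter '*.tests.ps1' | ForEach-Object {
    Invoke-Pester -Script $_.FullName -OutputFile "$($_.BaseName)Results.xml" -OutputFormat 'NUnitXml'
}

Each result file ends in Results.xml, which will matter when we configure the test results file pattern in the VSTS build below.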

Pester tests (*.tests.ps1)

Testing logic and standards are not covered at this time in this guide. There are numerous PluralSight courses, books, and online resources for learning proper testing methodology and learning how Pester works. Keep in mind while writing Pester tests that the module you are testing is in a different location than the test itself (if you are following the recommended standards and style). So, to dot-source a script within a test, you may need to reference a location like ..\ScriptName\Script-Name.ps1.
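For example, a test file that dot-sources a script from its sibling folder (the script and folder names here are just placeholders) might start like this.

# Tests\Script-Name.tests.ps1
# dot-source the script under test from the folder beside the Tests folder
. "$PSScriptRoot\..\ScriptName\Script-Name.ps1"

Describe 'Script-Name' {
    It 'does something useful' {
        # assertions for your script go here
    }
}

$PSScriptRoot resolves to the folder the test file lives in, which keeps the relative path working no matter where the build agent checks out your code.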

Confirm the functionality of your tests on your local workstation before following the steps in this guide. It is much easier to troubleshoot a faulty test in your local environment than it is to do so in a hosted VSTS build environment.

PSScriptAnalyzer tests (PSSA.tests.ps1)

The script for running PSScriptAnalyzer in hosted VSTS is more clearly defined than the Pester tests, addressed above. This guide does not cover the inclusion or exclusion of specific rules, or the use of custom rules. The majority of PowerShell code tested will be well served by the standard rule configuration that comes with PSScriptAnalyzer. If you are looking to use custom rule sets, it is assumed that you will be capable of altering the below script to suit your unique needs.

The Pester test that runs PSScriptAnalyzer testing should look something like this.

Describe 'Testing against PSSA rules' {
    Context 'PSSA Standard Rules' {
        $analysis = Invoke-ScriptAnalyzer -Path '..\ScriptName\Set-E.ps1'
        $scriptAnalyzerRules = Get-ScriptAnalyzerRule

        foreach ($rule in $scriptAnalyzerRules) {
            It "Should pass $rule" {
                if ($analysis.RuleName -contains $rule) {
                    $analysis |
                        Where-Object RuleName -EQ $rule -OutVariable failures |
                        Out-Default
                    $failures.Count | Should Be 0
                }
            }
        }
    }
}

The basic logic of this Pester test is that it performs an Invoke-ScriptAnalyzer on the script we are interested in testing (this may need to be adjusted for your purposes to include all of the files associated with a module, etc.), and examines the results. The script gets a list of all of the PSScriptAnalyzer rules, writes a test for each rule, and if the analysis contains a violation of a given rule, the test for that specific rule fails. Running the tests this way allows us to export granular results that indicate, from within VSTS, which PSScriptAnalyzer rule was broken.

Configuring the VSTS Build

Normally when you configure a VSTS build for a PowerShell script or module, you will also have steps for signing the artifacts, some other tests for business validation, and perhaps some other preparation for a VSTS release. In this guideline, we are concerned only with the Pester and PSScriptAnalyzer testing and so this build will look otherwise incomplete.

Steps

  1. Create a new build in the appropriate folder
  2. Add two build steps
    1. Run a PowerShell script
    2. Publish test results
  3. Configure the “Run a PowerShell script” step
    1. Give a more meaningful name such as “Run Pester & PSSA Tests”
    2. Type: File Path
    3. Script Path: Identify the Invoke-Test.ps1 file which operates as per above section
    4. Arguments: Leave blank
    5. Advanced
      1. Working folder: Change to the root folder for the module or script (Important)
      2. Fail on Standard Error: Checked
    6. Control Options
      1. Enabled: Checked
      2. Continue on error: Checked
    7. Always run: Checked
    8. Timeout: 0
  4. Configure the “Publish test results” step
    1. Test Result Format: NUnit
    2. Test Results Files: **/*Results.xml (Important: whatever this pattern is, all the XML documents you configure in the last few lines of Invoke-Test.ps1 need to match the pattern)
    3. Merge Test Results: Unchecked
    4. Test Run Title: Leave blank
    5. Advanced
      1. Platform: Leave blank
      2. Configuration: Leave blank
      3. Upload Test Attachments: Checked
      4. Control Options
        1. Enabled: Checked
        2. Continue on error: Unchecked
    6. Always run: Checked
    7. Timeout: 0
  5. Save the build with a meaningful name
  6. Queue a new build to test your work

Looking at Test Results

After a build has run, if you properly configured the tests as described above, you will be able to view the test results directly in VSTS. Follow these steps to access test results.

  1. Open VSTS
  2. Enter the Build & Release area
  3. Navigate to and click the build you are interested in
  4. You will see a list of recently completed builds in the summary pane; click the one you’re interested in
  5. Click Tests

Here, you’ll see a summary of all of the tests that ran on your code. You’ll see the total tests, failed tests, pass percentage, and how long testing took. You can configure this view a bit, but to see more in depth information about a specific test, click it. All tests are labeled as Pester tests, because they technically are (recall, we used Pester to identify PSScriptAnalyzer failures).

Figure 1: Viewing the PSScriptAnalyzer results for a test script

Here, you’ll see a more detailed summary of the results of the specific test. Click on Test Results to see more detailed results. You can scroll through the list and see specifically which tests failed.

Figure 2: Observing a specific failed test among passed tests

Double clicking on a specific test that passed or failed will bring you to more detailed information, but what you probably actually want is the raw console output from the test. The raw console output is the only place you can see the line in your script or module that failed the test. What you see in the Stack Trace screen in the detailed view, is the line in the test that failed, not the line in the script that failed the test.

Close the tab that was opened to view detailed test results and you should return to the Test Summary screen. Click the “Download all logs as zip” button, and you can examine the raw console output.

Figure 3: Downloading the raw console output from a test

In the .zip that you save, there is a Build folder. Open it and you will see .txt files containing the raw console output of the build, broken down by build step. Open the file for running the tests and you can see exactly what caused a specific test to fail.

Figure 4: Viewing the raw console output of a test and observing a PSScriptAnalyzer rule violation

You can scroll through this output and see the same information for the other tests you ran. You may also wish to look through the other .txt files if you believe there may be errors in other parts of the build that could have generated raw console output.

Read More

Does A String Start Or End In A Certain Character?

Can you tell in PowerShell if a string ends in a specific character, or if it starts in one? Of course you can. Regex to the rescue!

It’s a pretty simple task, actually. Consider the following examples

'something\' -match '.+?\\$'
#returns true

'something' -match '.+?\\$'
#returns false

'\something' -match '^\\.+?'
#returns true

'something' -match '^\\.+?'
#returns false

In the first two examples, I’m checking to see if the string ends in a backslash. In the last two examples, I’m seeing if the string starts with one. The regex pattern being matched for the first two is .+?\\$ . What’s that mean? Well, the first part .+? means “any character, and as many of them as it takes to get to the next part of the regex”. The second part \\ means “a backslash” (because \ is the escape character, we’re basically escaping the escape character). The last part $ is the signal for the end of the line. Effectively what we have is “anything at all, where the last thing on the line is a backslash” which is exactly what we’re looking for. In the second two examples, I’ve just moved the \\ to the start of the pattern and started with ^ instead of ending with $, because ^ is the signal for the start of the line.

Now you can do things like this.

$dir = 'c:\temp'
if ($dir -notmatch '.+?\\$')
  {
    $dir += '\'
  }
$dir
#returns 'c:\temp\'

Here, I’m checking to see if the string ‘c:\temp’ ends in a backslash, and if it doesn’t, I’m appending one.

Cool, right?

Read More

Quick Tip - Validate The Length Of An Integer

A little while ago, I fielded a question in the PowerShell Slack channel which was “How do I make sure a variable, which is an int, is of a certain length?”

Turns out it’s not too hard. You just need to use a little regex. Consider the following example.

[int]$v6 = 849032
[int]$v2 = 23
$v6 -match '^\d{6}$'
$v2 -match '^\d{6}$'

$v6 is an int that is six digits long. $v2 is an int that is only two digits long. On lines three and four, we’re testing to see if each variable matches the pattern ’^\d{6}$’ which is regex speak for “start of the line, any digit, and six of them, end of the line”. The first one will be true, because it’s six digits, and the second one will be false. You could also use something like ’^\d{4,6}$’ to validate that the int is between four and six digits long.
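If you’d rather bake this validation into a function, a minimal sketch (the function and parameter names here are made up for illustration) can lean on ValidatePattern, which matches the parameter’s string representation against the regex.

function Set-StoreNumber {
    param (
        # only accept ints that are exactly six digits long
        [ValidatePattern('^\d{6}$')]
        [int]$Number
    )
    $Number
}

Set-StoreNumber -Number 849032   # works
Set-StoreNumber -Number 23       # throws a parameter validation error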

Read More

PowerShell 10 Year Anniversary Code Golf Winners

For the PowerShell 10 Year Anniversary, Will Anderson (@GamerLivingWill on Twitter) and I (@MrThomasRayner on Twitter) ran a three-hole code golf competition on code-golf.com, a site developed by fellow MVP Adam Driscoll.

Here is the link to all the background info on the competition: https://github.com/ThmsRynr/PS10YearCodeGolf . Check this page out for links to the individual holes, too.

So, without further delay, let’s announce the winners!

Hole 1

The challenge was to get all the security updates installed on the local computer in the last 30 days and return the results in the form of a [Microsoft.Management.Infrastructure.CimInstance] object (or an array of them).

The winner of this hole is Simon Wåhlin. Here is their 46 character submission.

gcim(gcls *ix*|% *mC*e)|? I*n -gt((date)+-30d)

gcls *ix* gets the CimClass win32_quickfixengineering and % *mC*e gets the CimClassName property. gcim is an alias for Get-CimInstance which, as per the previous section, is getting the win32_quickfixengineering class. The results are piped into the where-object cmdlet where the property matching the pattern I*n (which happens to be InstalledOn) is greater than the current date, minus 30 days.
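De-golfed, the same logic looks roughly like this.

Get-CimInstance -ClassName Win32_QuickFixEngineering |
    Where-Object { $_.InstalledOn -gt (Get-Date).AddDays(-30) }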

Hole 2

The challenge was to get the top ten file extensions in c:\windows\system32, only return 10 items and group results by extension.

The winner of this hole is Simon Wåhlin again. Here is their 42 character submission.

(ls C:\*\s*2\*.*|% E*n|group|sort c* -d)[0..9]

ls c:\*\s*2\*.* means Get-ChildItem where the path is c:\<any directory>\<a directory matching s*2>\<files, not directories>, and this pattern only matches the path c:\windows\system32\<files>. This is piped into the foreach-object cmdlet to retrieve the property that matches the pattern E*n, which is the Extension property. The extensions are piped into the group-object cmdlet, and the groups are piped into the sort-object cmdlet, sorted by the property that matches the pattern c*, which is Count, in descending order. This is an array, and the items in positions 0-9 are returned.
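Written out long-hand, the submission is roughly equivalent to this.

(Get-ChildItem -Path 'C:\Windows\System32' -File |
    ForEach-Object Extension |
    Group-Object |
    Sort-Object -Property Count -Descending)[0..9]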

There were shorter submissions for this hole that didn’t explicitly target c:\windows\system32 and therefore missed the challenge. You could not assume we were already on c: or running as admin, etc. Some solutions included folders in the results which also missed the challenge.

Hole 3

The challenge was to get all the active aliases that are fewer than three characters long and do not resolve to a Get- command. For this hole, even though it wasn’t in the Pester test, you had to assume that non-standard aliases might be on the system. That’s why we specifically mentioned that we didn’t want you to return aliases that resolve to Get-*, and the Pester test checked the ResolvedCommand.Name property of the aliases you returned.

To break some submissions that didn’t check what the aliases resolved to, you could just run New-Alias x Get-ChildItem to create a new alias of ‘x’ that resolves to Get-ChildItem.

The winner of this hole is EdijsPerkums. Here is their 24 character submission.

gal ?,??|? Di* -Notm Get

Get-Alias is passed an array of wildcard patterns, ?,?? which correspond to alias names of one and two characters. The results are piped into the where-object cmdlet to isolate aliases whose property matching the pattern Di* (which happens to be DisplayName) doesn’t match Get.
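Long-hand, that submission is roughly the following; a stricter version would filter on ResolvedCommand.Name -notlike 'Get-*' instead, as described above.

Get-Alias -Name '?', '??' |
    Where-Object { $_.DisplayName -notmatch 'Get' }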

Congratulations to all the winners! We will be in touch to get you your prizes. We hope you all had fun with this mini-competition. Don’t forget to check out all the terrific material from the PowerShell 10 Year Anniversary on Channel 9!

Read More

Quick Tip - Get All The Security Patches Installed On A Server Since A Specific Date

Recently, I needed to get a list of all the security patches I’d installed on a group of servers in the last year. It turns out that there’s a WMI class for this and it’s super easy to retrieve this info.

get-wmiobject win32_quickfixengineering -ComputerName $CompName | ? { $_.InstalledOn -gt (get-date).addyears(-1) }

In the win32_quickfixengineering class, you’ll find all the security patches installed on a system. One of the properties is the InstalledOn attribute, which we can filter on to keep only the patches installed more recently than a year ago.

If you have a list of servers to do this for, this is still really easy.

$svrs = @"
server1
server2
server3
"@

$svrs.split("`n") | % { get-wmiobject win32_quickfixengineering -ComputerName $_.trim() | ? { $_.InstalledOn -gt (get-date).addyears(-1) } }

Just paste them into a here-string and execute this for each of them.

Read More

Using PowerShell To List All The Fonts In A Word Document

Recently I was challenged by a coworker to use PowerShell to list all the fonts in a Word document. It turned out to be easier than I thought it would be… but also slower than I thought it would be. Here’s what I came up with.

$Word = New-Object -ComObject Word.Application
$OpenDoc = $Word.Documents.open('c:\temp\test.docx')
$OpenDoc.words | % { $_ | select -ExpandProperty font } | select Name -Unique
$OpenDoc.close()
$Word.quit()

There could very well be a better way of doing this but this is what I came up with in a hurry. Line 1 declares a new instance of Word and line 2 opens the document we’re looking at. Then, for each word (which is handily a property of the open word document), we’re expanding the font property and selecting all the unique names of the fonts on line 3. Lines 4 and 5 close the document and quit Word.

So you can get something like this!

Get All The Fonts In A Word Document Via PowerShell
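One caveat with COM automation like this: the Word process can linger in memory even after you call quit(). A hedged bit of cleanup, if you run into that, is to release the COM reference explicitly.

# release the COM reference so WINWORD.EXE doesn't hang around
[void][System.Runtime.InteropServices.Marshal]::ReleaseComObject($Word)
Remove-Variable Word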

Read More

Quick Tip - Allow A Null Value For An Object That Doesn't Normally Allow It

In the PowerShell Slack channel (powershell.slack.com) a question came up along the lines of “I have a script that needs to pass a datetime object, but sometimes I’d like that datetime object to be null”. Never mind that maybe the script could be re-architected. Let’s solve this problem.

The issue is, if you try to assign a null value to a datetime object, you get an error.

[datetime]$null
Cannot convert null to type "System.DateTime".

The solution is super easy. Just make the thing nullable.

[nullable[datetime]]$null

This will return no output. So when you’re declaring the variable that will hold your datetime object, just make sure you make it nullable.

[nullable[datetime]]$date = $MaybeNullMaybeNot

Just for more proof this works as advertised, try this.

try { [datetime]$null; write-output 'worked!' } catch { write-output 'no worked!' }
no worked!

try { [nullable[datetime]]$null; write-output 'worked!' } catch { write-output 'no worked!' }
worked!

Cool!

Read More

Quick Tip - Copy The Output Of The Last PowerShell Command To Clipboard

I recently found myself poking around in PowerShell and going “oh, good now I want to copy and paste that output into an email/dialog box/tweet/notepad/another script/complaint box” and either trying to copy and paste it out of PowerShell or hitting the up arrow and piping whatever the last command was into Set-Clipboard. What a hassle.

So, I threw this small function into my profile.

function cc { r | scb }

You’ll need PowerShell 5.0 for this one (for Set-Clipboard). This just looks like gibberish though, what’s going on?

Well, clearly I’m defining a function named cc which is not a properly named PowerShell function but I’m being lazy. What does it do? Well it does r | scb.

r is an alias for Invoke-History which re-runs the last command you typed. Try it yourself.

PS G:\> write-output "hah!"
hah!

PS G:\> r
write-output "hah!"
hah!

PS G:\> r
write-output "hah!"
hah!

scb is an alias for Set-Clipboard which means whatever came out of the last command will be the new contents of your clipboard.

The cool thing about this is it doesn’t just have to be text. Check out my other post about all the things Set-Clipboard can do.
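For instance, in Windows PowerShell 5.x, Set-Clipboard has a -Path parameter that puts the file itself on the clipboard, not just its name, ready to paste into Explorer.

# copies the file itself, not just its name
Set-Clipboard -Path 'C:\temp\test1.txt'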

Read More

Quick Tip - PowerShell Regex To Get Value Between Quotation Marks

If you’ve got a value like the following…

$s = @"
Here is: "Some data"
Here's "some other data"
this is "important" data
"@

… that maybe came from the body of a file, was returned by some other part of a script, etc., and you just want the portions that are actually between the quotes, the quickest and easiest way to get it is through a regular expression match.

That’s right, forget splitting or trimming or doing other weird string manipulation stuff. Just use the [regex]::matches() feature of PowerShell to get your values.

[regex]::matches($s,'(?<=\").+?(?=\")').value

Matches takes two parameters. 1. The value to look for matches in, in this case the here-string in my $s variable, and 2. The regular expression to be used for matching. Since Matches returns a few items, we are making sure to just select the value for each match.

So what is that regex doing? Let’s break it down into its parts.

  • (?<=\") this part is a look behind as specified by the ?<= part. In this case, whatever we are matching will come right after a quote. Doing the look behind prevents the quotation mark itself from actually being part of the matched value. Notice I have to escape the quotation mark character.
  • .+? this part basically matches as many characters as it takes to get to whatever the next part of the regex is. Look into regex lazy mode vs greedy mode.
  • (?=\") this part is a look ahead as specified by the ?= part. We're looking ahead for a quotation mark because whatever comes after our match is done will be a quotation mark.

So basically what we’ve got is “whatever comes after a quotation mark, and as much of that as you need until you get to another quotation mark”. Easy, right? Don’t you love regex?

Read More

How To Send An Email Whenever A File Gets Changed

A little while ago, I fielded a question in the PowerShell Slack channel which was “How do I send an email automatically whenever a change is made to a specific file?”

Turns out it’s not too hard. You just need to set up a file watcher.

$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = 'C:\temp\'
$watcher.Filter = 'test1.txt'
$watcher.EnableRaisingEvents = $true

$changed = Register-ObjectEvent $watcher 'Changed' -Action {
   write-output "Changed: $($eventArgs.FullPath)"
}

First, we create the watcher, which is just a FileSystemWatcher object. Technically the watcher watches the whole directory for changes (the path), which is why we add a filter.

Then we register an ObjectEvent, so that whenever the watcher sees a change event, it performs an action. In this case, I just have it writing output but it could easily be sending an email or performing some other task.
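As a sketch of what the email version might look like (the addresses and SMTP server below are placeholders you’d swap for your own):

$changed = Register-ObjectEvent $watcher 'Changed' -Action {
    $mailParams = @{
        To         = 'you@example.com'        # placeholder
        From       = 'watcher@example.com'    # placeholder
        Subject    = "Changed: $($eventArgs.FullPath)"
        Body       = "The watched file changed at $(Get-Date)."
        SmtpServer = 'smtp.example.com'       # placeholder
    }
    Send-MailMessage @mailParams
}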

To get rid of the ObjectEvent, just run the following.

Unregister-Event $changed.Id

It’s just that easy!

Read More

Getting Started With Pester

If you don’t know what Pester is, it’s a framework for running unit tests and validating PowerShell code. Also, it’s awesome. In May I finally dipped my toe in the water with a pretty simple test for a REALLY simple function. I’m not going to go into a world of detail on how exactly all my Pester code works because there are tons of guides for that. What I’m going to do instead is provide a quick run down of what I came up with.

First things first, I need a function to validate.

function Write-SomeMath 
{
    param(
        [int]$First,
        [int]$Second
    )
    return $First + $Second
}

I guess that will work. Write-SomeMath takes two integers and returns their sum. Hardly a breathtaking display of complexity and function but it will do just fine for this example.

Now I need to install Pester. The easiest way to do this is using the PSGet module in PowerShell 5.0 to get it from PowerShellGallery.com.

Install-Module Pester -Scope CurrentUser -Force
Import-Module Pester

The next thing I need is a Describe block.

Describe 'GoofingWithPester.ps1' {

}

This Describe block will contain and - you guessed it - describe the tests (I just used my filename) and provide a unique TestDrive (check out the getting started link).

Now I need a Context block.

Describe 'GoofingWithPester.ps1' {
    Context 'Write-SomeMath' {
        
    }
}

I’m further grouping my tests by creating a Context here for my Write-SomeMath function. This could have been named anything.

Now, I could start with a bunch of tests, but I want to show off a particular feature of Pester that allows you to pass an array of different test cases.

Describe 'GoofingWithPester.ps1' {
    Context 'Write-SomeMath' {
        $testcases = @(
            @{
                fir  = 1
                sec  = 2
                exp  = 3
                test = '1 and 2'
            }, 
            @{
                fir  = 3
                sec  = 6
                exp  = 91 #wrong on purpose
                test = '3 and 6 (wrong on purpose)'
            }, 
            @{
                fir  = 4
                sec  = 6
                exp  = 10
                test = '4 and 6'
            }
        )

    }
}

All I did was define an array called $testcases which holds an array of hash tables. It’s got the first number, second number, expected result and a name of what we’re testing. Now I can pass this entire array to a test rather than crafting different tests for all of them individually.

Describe 'GoofingWithPester.ps1' {
    Context 'Write-SomeMath' {
        $testcases = @(
            @{
                fir  = 1
                sec  = 2
                exp  = 3
                test = '1 and 2'
            }, 
            @{
                fir  = 3
                sec  = 6
                exp  = 91 #wrong on purpose
                test = '3 and 6 (wrong on purpose)'
            }, 
            @{
                fir  = 4
                sec  = 6
                exp  = 10
                test = '4 and 6'
            }
        )
        It 'Can add <test>' -TestCases $testcases {
            param($fir,$sec,$exp)
            Write-SomeMath -First $fir -Second $sec | Should Be $exp
        }

    }
}

This is an It block which is what Pester calls a test. I’ve named it “Can add <test>” and it will pull the “test” value from the hashtable and fill it in. Cool! I’m using the -TestCases parameter to pass my array of test cases to the It block. Then I’ve got parameters inside the test for my first value, second value and expected outcome. I execute Write-SomeMath with the values pulled from my test cases and pipe the result to “Should Be” to compare the outcome to my expected outcome.

Now, just one more test for fun. What if I don’t pass an integer to my function?

Describe 'GoofingWithPester.ps1' {
    Context 'Write-SomeMath' {
        $testcases = @(
            @{
                fir  = 1
                sec  = 2
                exp  = 3
                test = '1 and 2'
            }, 
            @{
                fir  = 3
                sec  = 6
                exp  = 91 #wrong on purpose
                test = '3 and 6 (wrong on purpose)'
            }, 
            @{
                fir  = 4
                sec  = 6
                exp  = 10
                test = '4 and 6'
            }
        )
        It 'Can add <test>' -TestCases $testcases {
            param($fir,$sec,$exp)
            Write-SomeMath -First $fir -Second $sec | Should Be $exp
        }
        It 'Detects wrong datatypes' {
            {Write-SomeMath -First 9 -Second 'cat'} | Should throw
        }
    }
}

Another It block for detecting wrong datatypes. I pipe the result into Should throw because my function should throw an error. For this to work properly, the code I’m testing has to be wrapped in a scriptblock; otherwise the error is thrown as soon as the line is evaluated, instead of being thrown by the scriptblock where Should can catch and assert on it.

Here’s the outcome when I run this file!

Figure: Getting Started With Pester - Pester results

Pretty cool. My first test passes, the second one fails and tells me why, the third and fourth tests pass. The fourth one is especially interesting. The function FAILED but because the test said it SHOULD FAIL, the test itself passed.

So that’s my “dip my toes in the water” intro Pester test. Stay tuned for more complicated examples.

Read More

Using PowerShell To Add Groups To "AcceptMessagesOnlyFromDLMembers" Exchange Attribute

Here’s a bit of an obscure task. In Exchange you can configure the AcceptMessagesOnlyFromDLMembers attribute which does what it sounds like it does: it only allows the mail recipient to accept messages from members of specific distribution lists. The problem is, there’s no built in method for appending a distribution list (DL) to an existing list of DLs. If you set AcceptMessagesOnlyFromDLMembers equal to a value, it overwrites what was there before. So, I wrote a quick script to append a value instead of overwriting it. You’ll need a remote Exchange Management Shell and the AD management module for this.

function Add-AcceptMessagesOnlyFromDLMembers 
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$AppendTo,
        [Parameter(Mandatory)]
        [string]$DLName
    )
    
    $arr = $(Get-MailContact $AppendTo | Select-Object AcceptMessagesOnlyFromDLMembers).AcceptMessagesOnlyFromDLMembers
    $arr += ($(Get-ADGroup $DLName -Properties CanonicalName).CanonicalName)
    set-mailContact $AppendTo -AcceptMessagesOnlyFromDLMembers:"$($arr)"
}

First things first, I declare the function named Add-AcceptMessagesOnlyFromDLMembers which is a bit more verbose than I’d usually like to make it, but I’m also a fan of descriptive function and cmdlet names.

Second, I need some parameters. The mail recipient whose AcceptMessagesOnlyFromDLMembers value we’re appending something to, and the DL that we’re appending.

Line 11 is where we begin doing the real work. I’ve got to get the mail contact and select just the value currently in AcceptMessagesOnlyFromDLMembers so I can append something to it. I store that data in $arr.

On line 12, I’m retrieving the CanonicalName attribute for the DL I want to append to the list of DLs that can send mail to this contact. The AcceptMessagesOnlyFromDLMembers attribute is a bit weird in that it only appears to take Canonical Names, not Distinguished names, etc.. I’m appending that value to the end of $arr.

Line 13 is pretty straight forward. I’m setting the AcceptMessagesOnlyFromDLMembers attribute to the value of $arr determined in line 12.

That’s it! If this is a task you perform regularly, please take this script and apply it. If you make it more robust, I’d love to see what your modifications are.

Read More

Easily Restore A Deleted Active Directory User

If you have a modern version of Active Directory, you have the opportunity to enable the Active Directory Recycle Bin. Once enabled, you have a chance to recover a deleted item once it has been removed from Active Directory.
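If the Recycle Bin isn’t enabled yet, turning it on looks like this. Note that enabling it is a one-way operation, and the -Target value is a placeholder for your own forest’s fully qualified domain name.

Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'lab.workingsysadmin.com'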

Here’s a quick and easy script to recover a user based on their username.

$dn = (Get-ADObject -SearchBase (get-addomain).deletedobjectscontainer -IncludeDeletedObjects -filter "samaccountname -eq '$Username'").distinguishedname
Restore-ADObject -identity $dn

On the first line, we’re getting the DistinguishedName for the deleted user. The DN changes when a user gets deleted because it’s in the Recycle Bin now. Where’s your deleted objects container? Well it’s easily found with the (Get-ADDomain).DeletedObjectsContainer part of line 1. All we’re doing is searching for AD objects in the deleted objects container whose username matches the one we’re looking for. We need to make sure the -IncludeDeletedObjects flag is set or nothing that’s deleted will be returned.

On the second line, we’re just using the Restore-ADObject cmdlet to restore the object at the DN we found above.

Read More

Quick Script Share - Adding A Bunch Of Random Test Users To Active Directory

I recently had a need to add a bunch of random users to a specific OU in Active Directory to do some testing. I didn’t care what their names were, but, I wanted to be able to find all the users that belonged to each batch. Here’s the script I wrote to do this.

#requires -Version 2 -Modules ActiveDirectory
<#
    .Synopsis
    Adds a bunch of dummy users to Active Directory.
    .Description
    Prompts a user for an OU in their AD and makes a bunch of random users in that OU.
    .Example
    New-RandomADUsers.ps1
    Runs the script with default parameters, will prompt for an OU
    .Example
    New-RandomADUsers.ps1 -OU 'OU=testing,OU=workingsysadmin,DC=lab,DC=workingsysadmin,DC=com' -Count 10
    Creates 10 random users in the specified OU
    .Parameter OU
    The OU to create random users in. Takes a distinguished name.
    .Parameter Count
    How many new users to create. Defaults to 10.
    .Parameter Password
    The password to assign to all the created users. Defaults to 'P@ssw0rd'.
    .Notes
    NAME:  New-RandomADUsers.ps1
    AUTHOR: Thomas Rayner
    LASTEDIT: 04/15/2016
    KEYWORDS:
    .Link
    https://thomasrayner.ca
#>

[CmdletBinding()]
param
(
  [String]
  [Parameter(Position=0)]
  $OU,

  [int]
  [Parameter(Position=1)]
  $Count = 10,

  [string]
  [Parameter(Position=2)]
  $Password = 'P@ssw0rd'
)


#region FUNCTIONS
##########################START OF FUNCTIONS##########################

#Show a GUI to select an OU
function Select-GUIOU
{
    #load required assemblies
    [void] [System.Reflection.Assembly]::LoadWithPartialName('System.Windows.Forms')
    [void] [System.Reflection.Assembly]::LoadWithPartialName('System.Drawing') 

    #build the form
    $objForm = New-Object System.Windows.Forms.Form 
    $objForm.Text = 'Select an OU'
    $objForm.Size = New-Object System.Drawing.Size(900,430) 
    $objForm.StartPosition = 'CenterScreen'

    #add the OK button
    $OKButton = New-Object System.Windows.Forms.Button
    $OKButton.Location = New-Object System.Drawing.Size(75,350)
    $OKButton.Size = New-Object System.Drawing.Size(75,23)
    $OKButton.Text = 'OK'

    #assign the OK button to the AcceptButton param
    $OKButton.DialogResult = [System.Windows.Forms.DialogResult]::OK
    $objForm.Controls.Add($OKButton)
    $objForm.AcceptButton = $OKButton

    #add the cancel button
    $CancelButton = New-Object System.Windows.Forms.Button
    $CancelButton.Location = New-Object System.Drawing.Size(150,350)
    $CancelButton.Size = New-Object System.Drawing.Size(75,23)
    $CancelButton.Text = 'Cancel'

    #assign the cancel button to the CancelButton param
    $CancelButton.DialogResult = [System.Windows.Forms.DialogResult]::Cancel
    $objForm.Controls.Add($CancelButton)
    $objForm.CancelButton = $CancelButton

    #add the Select OU label
    $objLabel = New-Object System.Windows.Forms.Label
    $objLabel.Location = New-Object System.Drawing.Size(10,20) 
    $objLabel.Size = New-Object System.Drawing.Size(400,20) 
    $objLabel.Text = "Please select an OU. Don't see one you want? Cancel and create it."
    $objForm.Controls.Add($objLabel) 

    #add the listbox to select an OU from
    $objListBox = New-Object System.Windows.Forms.ListBox 
    $objListBox.Location = New-Object System.Drawing.Size(10,40) 
    $objListBox.Size = New-Object System.Drawing.Size(860,90) 
    $objListBox.Height = 300

    #get all the OUs in the organization, sort them alphabetically and add them to the listbox
    $OUs = Get-ADOrganizationalUnit -filter '*'
    $OUs | Sort-Object | % { [void] $objListBox.Items.Add($_) }

    #add the listbox to the form
    $objForm.Controls.Add($objListBox) 

    #open this window on top of other windows
    $objForm.TopMost = $True

    #return the OU selected
    $result = $objForm.ShowDialog()

    #if the user clicks OK and selected an OU, return it
    if ($result -eq [System.Windows.Forms.DialogResult]::OK -and $objListBox.SelectedIndex -ge 0)
    {
        $selection = $objListBox.SelectedItem
        return $selection
    }
    else
    {
        throw 'Did not select an OU. Script terminated.'
    }
}

#Get the OU to create users in
function Get-OU
{
    [CmdletBinding()]
    param
    (
      [String]
      [Parameter(Position=0)]
      $OU
    )

    #if there was no OU passed, select one using the GUI
    if ([string]::IsNullOrEmpty($OU))
    {
        return $(Select-GUIOU).DistinguishedName
    }

    #if there was an OU passed, validate it exists
    else
    {
        #try to get the OU, if this is successful, return the value we used to find it (should be a DN)
        try
        {
            $TestOU = Get-ADOrganizationalUnit -Identity $OU
            return $TestOU.DistinguishedName
        }

        #if we couldn't find an OU with the name specified, use the GUI to find a new one
        catch [Exception]
        {
            Write-Output "[Oops] Couldn't find an OU that matched $OU so I made you pick again"
            return $(Select-GUIOU).DistinguishedName
        }
    }
}

###########################END OF FUNCTIONS###########################
#endregion



#validate the OU is set to something valid, communicate valid OU to user
$OU = Get-OU $OU
Write-Output "Selected OU: $OU"

#don't need to validate Count because the param block that leads this script will throw an error if Count isn't an integer... 
#... assuming users are smart enough to enter a positive number (for loop below will throw an index error and nothing bad should happen)
Write-Output "Selected Count: $Count"

#create array of all upper and lowercase letters (char code representative)
$Upper = (65..90)
$Lower = (97..122)

#create an ID for this batch of created users
$ID = (Get-Date).Ticks

#create $count many users
$null = for ($i = 0; $i -lt $Count; $i++)
{
    #make a random first name, first initial capitalized
    $Initial = Get-Random -InputObject $Upper -Count 1 | % { [char]$_ }
    $RestOfname = Get-Random -InputObject $Lower -Count 4 | % { [char]$_ }
    $FirstName = $Initial + $RestOfName -replace ' ',''
    
    #make a random last name, first initial capitalized
    $Initial = Get-Random -InputObject $Upper -Count 1 | % { [char]$_ }
    $RestOfname = Get-Random -InputObject $Lower -Count 4 | % { [char]$_ }
    $LastName = $Initial + $RestOfName -replace ' ',''

    #craft the displayname, name, samaccountname, UPN attributes
    #in the future, this will detect name collisions and mitigate
    $DisplayName = "$FirstName $LastName"
    $Name = $DisplayName
    $SamAccountName = "$($FirstName[0])$LastName"           #first initial last name, IE: Thomas Rayner becomes TRayner
    $UPN = $SamAccountName + '@' + (Get-ADDomain).DNSRoot   #UPN suffix won't always be the value of (Get-ADDomain).DNSRoot but it usually is... this is why you read scripts before running them

    #assign the password
    $AccountPassword = ConvertTo-SecureString -AsPlainText $Password -Force

    #create description that can be used to group users created in different batches
    $Description = "Created by MVP Tool: New-RandomADUsers.ps1. Batch: $ID"

    #announce the creation of a user
    Write-Output "Creating $Name. Username: $SamAccountName"

    #add all the params to the user list
    $UserParams = @{}
    $UserParams.Add('GivenName',$FirstName)
    $UserParams.Add('Surname',$LastName)
    $UserParams.Add('DisplayName',$DisplayName)
    $UserParams.Add('Name',$Name)
    $UserParams.Add('SamAccountName',$SamAccountName)
    $UserParams.Add('AccountPassword',$AccountPassword)
    $UserParams.Add('Description',$Description)
    $UserParams.Add('Path',$OU)                           #put them all in the OU we created
    $UserParams.Add('Enabled',$True)                      #enable the account

    #create the AD user
    New-ADUser @UserParams
}

#list the users created, verify they can be found in AD
$CreatedUsers = (Get-ADUser -SearchBase $OU -Filter "Description -like '*$ID*'").SamAccountName
Write-Output 'Created following users'
$CreatedUsers

 

Read More

Happy Birthday To Me!

Today is my birthday and so I don’t feel like doing a whole ton of work. I do, however, feel like celebrating. Obviously that means singing Happy Birthday. That should be a pretty easy PowerShell task. In fact, it’s made even easier by the fact that fellow Microsoft MVP Trevor Sullivan already wrote and shared a script to do it. Here it is on the Microsoft Script Gallery: https://gallery.technet.microsoft.com/A-PowerShell-Happy-983c1253.

He’s got an array of hash tables which each consist of a pitch and a length. The [System.Console]::Beep() method just so happens to take a pitch and length parameter. Predictably, this method makes the computer speaker beep. Even if you don’t have speakers, this should still work. All the pitches and lengths correspond to the pitch of a beep and how long it should last.
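If you want to hear a single note for yourself, the method takes the pitch in hertz first and the duration in milliseconds second.

# 440 Hz (concert A) for half a second
[System.Console]::Beep(440, 500)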

Read More

Quick Tip - Detecting Special Characters In A String The Easy Way

Here’s a super easy way to detect special characters in a string. Consider the following.

$string1 = 'something'
$string2 = 'some@thing'

$string1 -eq $($string1 -replace '[^a-zA-Z]','')
$string2 -eq $($string2 -replace '[^a-zA-Z]','')

String1 has no special characters, String2 does. All I’m doing is comparing the string to “the string if we replace everything that isn’t a regular letter” using the -replace operator.

It’s just that easy.

You could do the same thing with the -match operator, too. The point here is looking at the regex.

$string -match '[^a-zA-Z]'

Read More

Getting Large Exchange Mailbox Folders With PowerShell

I’ve been continuing my quest to identify users who have large Exchange mailboxes. I wrote a function in my last post to find large Exchange mailboxes, but, I wanted to take this a step further and identify the large folders within user mailboxes that could stand to be cleaned out. For instance, maybe I want to find all the users who have a large Deleted Items folder or Sent Items or Calendar. You get the idea. It’s made to be run from a Remote Exchange Management Shell connection instead of by logging into an Exchange server via remote desktop and running such a shell manually. Remote administration is the future (just like my last post)!

So, let’s define the function and parameters.

function Get-LargeFolder 
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [ValidateSet('All', 'Calendar', 'Contacts', 'ConversationHistory', 'DeletedItems', 'Drafts', 'Inbox', 'JunkEmail', 'Journal', 'LegacyArchiveJournals', 'ManagedCustomFolder', 'NonIpmRoot', 'Notes', 'Outbox', 'Personal', 'RecoverableItems', 'RssSubscriptions', 'SentItems', 'SyncIssues', 'Tasks')]
        [string]$FolderScope = 'All',
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )
}

My function is going to be named Get-LargeFolder and takes three parameters. $FolderScope is used in the Get-MailboxFolderStatistics cmdlet (spoiler alert) and must belong to the set of values specified. $Top is an integer used to define how many results we’re going to return and $Identity can be specified as an individual username to examine a specific mailbox, or left blank (defaulted to *) to examine the entire organization.

function Get-LargeFolder 
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [ValidateSet('All', 'Calendar', 'Contacts', 'ConversationHistory', 'DeletedItems', 'Drafts', 'Inbox', 'JunkEmail', 'Journal', 'LegacyArchiveJournals', 'ManagedCustomFolder', 'NonIpmRoot', 'Notes', 'Outbox', 'Personal', 'RecoverableItems', 'RssSubscriptions', 'SentItems', 'SyncIssues', 'Tasks')]
        [string]$FolderScope = 'All',
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )

    Get-Mailbox -Identity $Identity -ResultSize Unlimited |
    Get-MailboxFolderStatistics -FolderScope $FolderScope 
}

Now I’ve added a couple lines to get all the mailboxes in my organization (or a specific user’s mailbox) which I pipe into a Get-MailboxFolderStatistics command with the FolderScope parameter set to the same value we passed to our function. Now we need to sort the results, but, see my last post for why that’s going to be complicated.

function Get-LargeFolder 
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [ValidateSet('All', 'Calendar', 'Contacts', 'ConversationHistory', 'DeletedItems', 'Drafts', 'Inbox', 'JunkEmail', 'Journal', 'LegacyArchiveJournals', 'ManagedCustomFolder', 'NonIpmRoot', 'Notes', 'Outbox', 'Personal', 'RecoverableItems', 'RssSubscriptions', 'SentItems', 'SyncIssues', 'Tasks')]
        [string]$FolderScope = 'All',
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )

    Get-Mailbox -Identity $Identity -ResultSize Unlimited |
    Get-MailboxFolderStatistics -FolderScope $FolderScope |
    Sort-Object -Property @{
        e = {
            $_.FolderSize.split('(').split(' ')[-2].replace(',','') -as [double]
        }
    } -Descending 
}

The FolderSize parameter that comes back with a Get-MailboxFolderStatistics cmdlet is a string which I’m splitting up in order to get back only the value in bytes which I am casting to a double. Now that we have gathered our stats and put them in order, I just need to select them so they may be returned. Here is the complete script.

function Get-LargeFolder 
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [ValidateSet('All', 'Calendar', 'Contacts', 'ConversationHistory', 'DeletedItems', 'Drafts', 'Inbox', 'JunkEmail', 'Journal', 'LegacyArchiveJournals', 'ManagedCustomFolder', 'NonIpmRoot', 'Notes', 'Outbox', 'Personal', 'RecoverableItems', 'RssSubscriptions', 'SentItems', 'SyncIssues', 'Tasks')]
        [string]$FolderScope = 'All',
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )

    Get-Mailbox -Identity $Identity -ResultSize Unlimited |
    Get-MailboxFolderStatistics -FolderScope $FolderScope |
    Sort-Object -Property @{
        e = {
            $_.FolderSize.split('(').split(' ')[-2].replace(',','') -as [double]
        }
    } -Descending |
    Select-Object -Property @{
        l = 'NameFolder'
        e = {
            $_.Identity.Split('/')[-1]
        }
    }, 
    @{
        l = 'FolderSize'
        e = {
            $_.FolderSize.split('(').split(' ')[-2].replace(',', '') -as [double]
        }
    } -First $Top
}

Now you can do this.

#Get 25 largest Deleted Items folders in your organization
Get-LargeFolder -FolderScope 'DeletedItems' -Top 25

#Get my largest 10 folders
Get-LargeFolder -Identity ThmsRynr -Top 10

#Get the top 25 largest Deleted Items folder for users in a specific group
$arrLargeDelFolders = @()
(Get-ADGroupMember 'GroupName' -Recursive).SamAccountName | ForEach-Object -Process {
    $arrLargeDelFolders += Get-LargeFolder -FolderScope 'DeletedItems' -Identity $_ 
}
$arrLargeDelFolders |
Sort-Object -Property FolderSize -Descending |
Select-Object -Property NameFolder, @{
    l = 'FolderSize (Deleted Items)'
    e = {
        '{0:N0}' -f $_.FolderSize
    }
} -First 25 |
Format-Table -AutoSize

 

Read More

Getting Your Organization’s Largest Exchange Mailboxes With PowerShell

In a quest to hunt down users with large mailboxes, I wrote the following PowerShell function. It’s made to be run from a Remote Exchange Management Shell connection instead of by logging into an Exchange server via remote desktop and running such a shell manually. Remote administration is the future!

My requirements were rather basic. I wanted a function that would return the top 25 (or another number of my choosing) Exchange mailboxes in my organization by total size. I also wanted the ability to specify an individual user’s mailbox to see how large the specific box is.

So, let’s get started.

function Get-LargeMailbox
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )
}

All I’ve done here is declare my new function named Get-LargeMailbox and specified its parameters. $Top is the integer representing the number of mailboxes to return (defaulted to 1) and $Identity is the specific mailbox we want to return (defaulted to * which will return all mailboxes).

Now, I know I need to get my mailboxes and retrieve some statistics.

function Get-LargeMailbox
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )
        
    Get-Mailbox -Identity $Identity -ResultSize Unlimited |
    Get-MailboxStatistics 
}

So far, so good. We haven’t narrowed down the stats we care about yet but we’re getting all the mailboxes in the organization and retrieving all the stats for them. Now we’re about to run into a problem. There’s a property returned by Get-MailboxStatistics called TotalItemSize but, when you’re in a remote session, it’s hard to work with. Observe.

PS C:\> (Get-Mailbox ThmsRynr | Get-MailboxStatistics).TotalItemSize | Format-List


IsUnlimited : False
Value       : 2.303 GB (2,473,094,022 bytes)

You can see it returns a property consisting of a boolean value for if my quota is unlimited, and then a value of what my total size is. Ok, so that value is probably a number, right?

PS C:\> (Get-Mailbox ThmsRynr | Get-MailboxStatistics).TotalItemSize.Value | Get-Member


   TypeName: Deserialized.Microsoft.Exchange.Data.ByteQuantifiedSize
#output omitted

Well, yeah, it is. The Value of TotalItemSize is a number but it’s a Deserialized.Microsoft.Exchange.Data.ByteQuantifiedSize and when you’re connected to a remote Exchange Management Shell, you don’t have that library loaded unless you install some tools on your workstation. Rather than do that, can’t we just fool around with it a bit and avoid installing a bunch of superfluous Exchange management tools? I bet we can, especially since this value has a ToString() method associated with it.

Back to our function. I need to sort the results of my “Get all the mailboxes, get all their stats” command by the total size of the mailboxes.

function Get-LargeMailbox
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )
        
    Get-Mailbox -Identity $Identity -ResultSize Unlimited |
    Get-MailboxStatistics |
    Sort-Object -Property @{
        e = {
            $_.TotalItemSize.Value.ToString().split('(').split(' ')[-2].replace(',', '') -as [double]
        }
    } -Descending 
}

Oh boy, string manipulation is always fun, isn’t it? What I’ve done here is sorted my mailboxes by an expression. That expression is the result of converting the value of the TotalItemSize attribute to a string and manipulating it. I’m splitting it on the open bracket character, and then again on the space character. I’m taking the second last item in that array, stripping out the commas and casting it as a double (because some values are too big to be integers). That’s a lot of weird string manipulation for some of you to get your heads around, but look at the string returned by default. I need the number of bytes and that was the best way to get it.
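To make the string surgery concrete, here’s each stage applied to the sample value from earlier, with comments showing what I’d expect each step to return (calling split on the resulting array works thanks to member enumeration).

$s = '2.303 GB (2,473,094,022 bytes)'
$s.split('(')                              # '2.303 GB ' and '2,473,094,022 bytes)'
$s.split('(').split(' ')                   # '2.303', 'GB', '', '2,473,094,022', 'bytes)'
$s.split('(').split(' ')[-2]               # '2,473,094,022'
$s.split('(').split(' ')[-2].replace(',','') -as [double]   # 2473094022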

Now all I need to do is select the properties from my sorted list of mailboxes and return the top number of results. Here’s the final function.

function Get-LargeMailbox
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $False)]
        [int]$Top = 1,
        [Parameter(Mandatory = $False,
                Position = 1,
        ValueFromPipeline = $True)]
        [string]$Identity = '*'
    )
        
    Get-Mailbox -Identity $Identity -ResultSize Unlimited |
    Get-MailboxStatistics |
    Sort-Object -Property @{
        e = {
            $_.TotalItemSize.Value.ToString().split('(').split(' ')[-2].replace(',', '') -as [double]
        }
    } -Descending |
    Select-Object -Property DisplayName, @{
        l = 'MailboxSize'
        e = {
            $_.TotalItemSize.Value.ToString().split('(').split(' ')[-2].replace(',', '') -as [double]
        }
    } -First $Top
}

Now you can do things like this.

#See how big my individual mailbox is
Get-LargeMailbox -Identity ThmsRynr

#Get the largest 20 mailboxes in the organization
Get-LargeMailbox -Top 20

#Get the mailboxes for a specific AD group and sort by size
$arrLargeMailboxes = @()
(Get-ADGroupMember 'GroupName' -Recursive).SamAccountName | ForEach-Object -Process {
    $arrLargeMailboxes += Get-LargeMailbox -Identity $_ 
}
$arrLargeMailboxes |
Sort-Object -Property MailboxSize -Descending |
Select-Object -Property DisplayName, @{
    l = 'MailboxSize'
    e = {
        '{0:N0}' -f $_.MailboxSize
    }
} |
Format-Table -AutoSize

Before we end, let’s take a closer look at the last example.

First, I’m declaring an array to hold the results of users and how large their mailbox is. Then I’m getting all the members of a group, taking the SamAccountName and performing an action on each of them. That action, of course, is retrieving their mailbox size using the function I just wrote and appending the results to the array. Then I need to sort that array and display it. The Select-Object command has the formatting I included to make the mailbox sizes have commas separating every three digits.

Read More

Quick Script Share - Upgrade Windows Certificate Authority from CSP to KSP and from SHA-1 to SHA-256

I recently had the chance to work with Microsoft PFE, Mike MacGillivray, on an upgrade of some Windows Certificate Authorities and want to share the upgrade script with you. Here it is, without commentary. Details and explanation are currently forthcoming.

#requires -Version 2
#requires -RunAsAdministrator
$OldEAP = $ErrorActionPreference
$ErrorActionPreference = 'stop'

Function Add-LogEntry
{
    [CmdletBinding()] 
    Param( 
        [Parameter(Position = 0, 
                Mandatory = $True, 
        ValueFromPipeline = $True)] 
        [string]$LogLocation, 
        [Parameter(Position = 1, 
                Mandatory = $True, 
        ValueFromPipeline = $True)] 
        [string]$LogMessage 
    )
    $LogThis = "$(Get-Date -Format 'MM/dd/yyyy hh:mm:ss'): $LogMessage"
    $LogThis | Out-File -FilePath $LogLocation -Append
    write-output $LogThis
}

Write-Output -InputObject @"
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: This script will migrate CA keys from CSP to KSP and set up SHA256 for cert signing.
:: 
:: It will only work on Windows Server 2012 or 2012 R2 where the CA is configured with CSP.
:: (It won't work on Server 2008 R2)
::
:: Use CTRL+C to kill
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: 

"@

#region Stage 1 - Set Variables
$Password = Read-Host -Prompt 'Set password for key backup (not stored in script as securestring)'

$Drivename = Read-Host -Prompt 'Set drive letter including colon [C:]'
if ([string]::IsNullOrWhiteSpace($Drivename)) 
{
    $Drivename = 'C:' 
}

$Foldername = Read-Host -Prompt "Set folder name [CA-KSPMigration_$($env:computername)]"
if ([string]::IsNullOrWhiteSpace($Foldername)) 
{
    $Foldername = "CA-KSPMigration_$($env:computername)" 
}

if (Test-Path -Path "$Drivename\$Foldername") 
{
    Remove-Item -Path "$Drivename\$Foldername" -Recurse -Force 
}
New-Item -ItemType Directory -Path "$Drivename\$Foldername"

# certutil -cainfo name prints a line like 'CA name: <YourCAName>'; keep the last space-delimited token (assumes the CA name contains no spaces)
$CAName = cmd.exe /c 'certutil.exe -cainfo name'
$CAName = $CAName[0].split(' ')[-1]

$Logpath = Read-Host -Prompt "Set log path [$($Drivename)\$($Foldername)\log.txt]"
if ([string]::IsNullOrWhiteSpace($Logpath)) 
{
    $Logpath = "$($Drivename)\$($Foldername)\log.txt" 
}

Add-LogEntry $Logpath 'Variables configured'
Add-LogEntry $Logpath "Password: $($Password)"
Add-LogEntry $Logpath "Drivename: $($Drivename)"
Add-LogEntry $Logpath "Foldername: $($Foldername)"
Add-LogEntry $Logpath "CAName: $($CAName)"
Add-LogEntry $Logpath "Logpath: $($Logpath)"
#endregion

#region Stage 2 - Backup Existing CA
try
{
    Add-LogEntry $Logpath 'Performing full CA backup'

    cmd.exe /c "certutil -p $($Password) -backup $("$Drivename\$Foldername")"
    Add-LogEntry $Logpath 'Saved CA database and cert'

    cmd.exe /c "reg export hklm\system\currentcontrolset\services\certsvc\configuration $("$Drivename\$Foldername")\CA_Registry_Settings.reg /y"
    Add-LogEntry $Logpath 'Saved reg keys'

    Copy-Item -Path 'C:\Windows\System32\certsrv\certenroll\*.crl' -Destination "$Drivename\$Foldername"
    Add-LogEntry $Logpath 'Copied CRL files'

    cmd.exe /c 'certutil -catemplates' | Out-File -FilePath "$Drivename\$Foldername\Published_templates.txt"
    Add-LogEntry $Logpath 'Got list of published cert templates'
    
    Add-LogEntry $Logpath 'Finished full CA backup'
}
catch [Exception]
{
    Add-LogEntry $Logpath "*** Activity failed - Exception Message: $($_.Exception.Message)"
    exit
}
#endregion

#region Stage 3 - Delete existing certs and keys
try
{
    Stop-Service -Name 'certsvc'
    Add-LogEntry $Logpath 'CA service stopped'
    
    $CertSerial = cmd.exe /c "certutil -store My $("$CAName")" | Where-Object -FilterScript {
        $_ -match 'hash' 
    }
    $CertSerial | Out-File -FilePath "$Drivename\$Foldername\CA_Certificates.txt"
    Add-LogEntry $Logpath 'Got CA cert serials'
    
    $CertProvider = cmd.exe /c "certutil -store My $("$CAName")" | Where-Object -FilterScript {
        $_ -match 'provider' 
    }
    $CertProvider | Out-File -FilePath "$Drivename\$Foldername\CSP.txt"
    Add-LogEntry $Logpath 'Got CA CSPs'
    
    $CertSerial | ForEach-Object -Process {
        cmd.exe /c "certutil -delstore My `"$($_.Split(':')[-1].trim(' '))`"" 
    }
    Add-LogEntry $Logpath 'Deleted CA certificates'
    
    $CertProvider | ForEach-Object -Process {
        cmd.exe /c "certutil -CSP `"$($_.Split('=')[-1].trim(' '))`" -delkey $("$CAName")" 
    }
    Add-LogEntry $Logpath 'Deleted CA private keys'
}
catch [Exception]
{
    Add-LogEntry $Logpath "*** Activity failed - Exception Message: $($_.Exception.Message)"
    exit
}
#endregion

#region Stage 4 - Import keys in KSP and restore to CA
try
{
    cmd.exe /c "certutil -p $Password -csp `"Microsoft Software Key Storage Provider`" -importpfx `"$("$Drivename\$Foldername\$CAName.p12")`""
    Add-LogEntry $Logpath 'Imported CA cert and keys into KSP'
    
    cmd.exe /c "certutil -exportpfx -p $Password My $("$CAName") `"$("$Drivename\$Foldername\NewCAKeys.p12")`""
    Add-LogEntry $Logpath 'Exported keys so they can be installed on the CA'
    
    cmd.exe /c "certutil -p $Password -restorekey `"$("$Drivename\$Foldername\NewCAKeys.p12")`""
    Add-LogEntry $Logpath 'Restored keys into CA'
}
catch [Exception]
{
    Add-LogEntry $Logpath "*** Activity failed - Exception Message: $($_.Exception.Message)"
    exit
}
#endregion

#region Stage 5 - Create and import required registry settings
try
{
    $CSPreg = @"
    Windows Registry Editor Version 5.00
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\$CAName\CSP]
    "CNGHashAlgorithm"="SHA256"
    "CNGPublicKeyAlgorithm"="RSA"
    "HashAlgorithm"=dword:ffffffff
    "MachineKeyset"=dword:00000001
    "Provider"="Microsoft Software Key Storage Provider"
    "ProviderType"=dword:00000000
"@
    $CSPreg | Out-File -FilePath "$Drivename\$Foldername\csp.reg"
    Add-LogEntry $Logpath 'Created csp.reg'
    
    $Encryptionreg = @"
    Windows Registry Editor Version 5.00
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\$CAName\EncryptionCSP]
    "CNGEncryptionAlgorithm"="3DES"
    "CNGPublicKeyAlgorithm"="RSA"
    "EncryptionAlgorithm"=dword:6603
    "MachineKeyset"=dword:00000001
    "Provider"="Microsoft Software Key Storage Provider"
    "ProviderType"=dword:00000000
    "SymmetricKeySize"=dword:000000a8
"@
    $Encryptionreg | Out-File -FilePath "$Drivename\$Foldername\encryption.reg"
    Add-LogEntry $Logpath 'Created encryption.reg'
}
catch [Exception]
{
    Add-LogEntry $Logpath "*** Activity failed - Exception Message: $($_.Exception.Message)"
    exit
}

$ErrorActionPreference = 'SilentlyContinue'

cmd.exe /c "reg import $("$Drivename\$Foldername\encryption.reg")"
Add-LogEntry $Logpath 'Imported encryption.reg'

cmd.exe /c "reg import $("$Drivename\$Foldername\csp.reg")"
Add-LogEntry $Logpath 'Imported csp.reg'

Start-Service -Name 'certsvc'
Add-LogEntry $Logpath 'Started certsvc'


#endregion

$ErrorActionPreference = $OldEAP


Read More

Get All The Members Of The Distributions Lists That A User Is A Member Of

This is kind of a weird script tip but I bumped into a need for this kind of script so I thought I’d share it. In this post, I have a user and I want to get all the members of all the distribution lists that the user is a member of. That is to say, if the user is a member of DL1, DL2 and DL3 distribution lists, I want to get all the other members of all those distribution lists. You’re going to need a remote Exchange shell for this.

Here’s the code I came up with.

$DN = 'CN=ThmsRynr,OU=BestUsers,DC=lab,DC=workingsysadmin,DC=com'
Get-DistributionGroup -filter "members -eq '$DN'" | Select-Object Name,@{l='Members';e={(Get-DistributionGroupMember $_.SamAccountName -ResultSize Unlimited | % { $_.Name }) -join '; ' }}

Line 1 is just declaring a variable to hold the DistinguishedName attribute for the user I am interested in. Line 2 is the work line. The first thing I’m doing is getting all the distribution groups which have a member equal to the DN of the user I’m interested in. Now, the weirdness happens…

When you do a Get-DistributionGroup, you do not get the members of that group back with it. Here are the properties that come back that contain the string “mem” in the name.

PS C:\> Get-DistributionGroup -filter "members -eq '$DN'" | Select-Object -First 1 | Get-Member | Where-Object Name -match 'mem' | Select-Object Name

Name
----
AcceptMessagesOnlyFromDLMembers
AcceptMessagesOnlyFromSendersOrMembers
AddressListMembership
BypassModerationFromSendersOrMembers
MemberDepartRestriction
MemberJoinRestriction
RejectMessagesFromDLMembers
RejectMessagesFromSendersOrMembers

Nothing in there contains the members. So back to the command I wrote to accomplish my goal.

I’m piping the Distribution Groups returned into a Select-Object cmdlet to return the Name property and then a custom column. The label is Members and the content is going to just be a string of all the Distribution Group members’ names separated by semicolons. The expression for my custom column is a Get-DistributionGroupMember command for the Distribution Group piped into a Foreach-Object (alias is “%”) which returns an array of all the names of the members in the Distribution Group. I use the -join operator to take the array and convert it into a string separated by semicolons.
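If the -join operator is new to you, here’s a quick illustration of that array-to-string conversion (with made-up names):

PS C:\> ('Alice Aardvark','Bob Bobcat','Carol Cat') -join '; '
Alice Aardvark; Bob Bobcat; Carol Cat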

It’s just that easy!

Read More

Quick Script Share - Get-RandomPW - Create Random Passwords

I had a need to repeatedly create random passwords of varying lengths. To satisfy this need, I wrote the following basic script.

function Get-RandomPW
{
    param
    (
        [int]$Length = 16
    )
    $arrChars = 'abcdefghkmnprstuvwxyzABCDEFGHKLMNPRSTUVWXYZ123456789!@#$%^&*()-=_+'.ToCharArray()
    $sRandomString = -join $(1..$Length | Foreach-Object { Get-Random -InputObject $arrChars })
    return $sRandomString
}

On line 1, you can see I named my function Get-RandomPW which I did because I like following the standard Verb-Noun naming scheme that PowerShell functions and cmdlets are supposed to follow. On lines 3 through 6, I’m declaring my only parameter, $Length. $Length is an integer which will represent the length of the password we want. By default, I create a 16 character password.

On line 7, $arrChars is declared and assigned the value of all the valid characters for my password. I list all the characters in one big string and convert to a Char array because it’s easier to look at and manage, in my opinion.

On line 8, I finally build the password. For all the numbers between 1 and $Length, I’m getting a random item from $arrChars. The result of that is an array, so I use the -join operator to create a string from the array. On line 9, I return the password I built.

Here’s what the script looks like in action.

PS C:\> Get-RandomPW
ks1NWkgU4NLmeAv^

PS C:\> Get-RandomPW -Length 10
LMHCLFE2Ds

PS C:\> Get-RandomPW -Length 32
$76Gu3xRD$5GDgwe@nE_Ah#63ZSSd=+W

PS C:\> 1..10 | Foreach-Object { Get-RandomPW -Length 8 }
FtU59d42
dvbpGx9f
&&2&8K=x
@SRK$3m6
57A)*%Pc
RhEHAamX
mTfYV2cB
@h)GR1kb
%tUb^KZD
sxb^bZ)&


Read More

Just Enough Administration (JEA) First Look

If you’re reading this, it means that Windows Server 2016 Technical Preview 4 has been released (currently available on MSDN) and one of the new features it includes is Just Enough Administration (JEA)! Until now, you could use DSC to play with JEA, but now it’s baked into Windows Server 2016.

If you’re not sure what JEA is or does, check out this page published by Microsoft.

So how do you get started?

JEA gets put together like a module. There are a bunch of different ways to dive in, but for convenience, I’m just covering this one example. Build on it and learn for yourself how JEA can work for you specifically!

First things first, make a new directory in your modules folder and navigate to it.

$dir = 'C:\Windows\system32\WindowsPowerShell\v1.0\Modules\JEA-Test'
new-item -itemtype directory -path $dir
cd $dir

So far, so easy. Now, we’re going to use the brand new JEA cmdlets to configure what is basically our constrained endpoint.

New-PSSessionConfigurationFile -path "$dir\JEA-Test.pssc"

This PSSC is the first of two files we’re going to make. It’s a session config file that specifies the role mappings (we’ll get to roles in a second) and some other general config settings. A PSSC file looks like this.

@{

# Version number of the schema used for this document
SchemaVersion = '2.0.0.0'

# ID used to uniquely identify this document
GUID = 'c433f896-4241-4b12-b857-059a395c2d2b'

# Author of this document
Author = 'trayner'

# Description of the functionality provided by these settings
# Description = ''

# Session type defaults to apply for this session configuration. Can be 'RestrictedRemoteServer' (recommended), 'Empty', or 'Default'
SessionType = 'RestrictedRemoteServer'

# Directory to place session transcripts for this session configuration
# TranscriptDirectory = 'C:\Transcripts\'

# Whether to run this session configuration as the machine's (virtual) administrator account
# RunAsVirtualAccount = $true

# Groups associated with machine's (virtual) administrator account
# RunAsVirtualAccountGroups = 'Remote Desktop Users', 'Remote Management Users'

# Scripts to run when applied to a session
# ScriptsToProcess = 'C:\ConfigData\InitScript1.ps1', 'C:\ConfigData\InitScript2.ps1'

# User roles (security groups), and the role capabilities that should be applied to them when applied to a session
RoleDefinitions = @{ 'mvp-trayner\test users' = @{ RoleCapabilities = 'testers' } } 

}

If you’ve ever authored a PowerShell module before, this should look familiar. There are only a few things you need to do here. The first is to change the value for SessionType to RestrictedRemoteServer. You need to make it this in order to actually restrict the user connections.

You can enable RunAsVirtualAccount if you’re on an Active Directory Domain. I won’t get too deep into what virtual accounts do because my example is just on a standalone server.

The other important task to do is define the RoleDefinitions line. This is a hashtable where you set a group (in my case, local to my server) assigned to a “RoleCapability”. In this case, the role I’m assigning is just named “testers” and the local group on my server is named “test users”.

Save that and now it’s time to make a new directory. Roles must be in a “RoleCapabilities” folder within your module.

new-item -itemtype directory "$dir\RoleCapabilities"

Now we are going to continue using our awesome new JEA cmdlets to create a PowerShell Role Capabilities file.

New-PSRoleCapabilityFile -path "$dir\RoleCapabilities\testers.psrc"

It’s very important to note here that the name of my PSRC file is the same as the RoleCapability that I assigned in the PSSC file above.

PSRC files look like this. Let’s point out some of the key areas in this file and some of the tools you now have at your disposal.

Think of a PSRC as a giant white list. If you don’t explicitly allow something, it’s not going to happen. Because PSRCs all act as white lists, if you have users who are eligible for more than one PSRC (through more than one group membership/role assignment in a PSSC), the access a user gets is everything that’s white listed by any role the user is eligible for. That is to say, PSRCs merge if users have more than one that apply.

@{

# ID used to uniquely identify this document
GUID = '3e2ca105-db93-4442-acfd-037593c6c644'

# Author of this document
Author = 'trayner'

# Description of the functionality provided by these settings
# Description = ''

# Company associated with this document
CompanyName = 'Unknown'

# Copyright statement for this document
Copyright = '(c) 2015 trayner. All rights reserved.'

# Modules to import when applied to a session
# ModulesToImport = 'MyCustomModule', @{ ModuleName = 'MyCustomModule'; ModuleVersion = '1.0.0.0'; GUID = '4d30d5f0-cb16-4898-812d-f20a6c596bdf' }

# Aliases to make visible when applied to a session
# VisibleAliases = 'Item1', 'Item2'

# Cmdlets to make visible when applied to a session
VisibleCmdlets = 'Get-*', 'Measure-*', 'Select-Object', @{ Name= 'New-Item'; Parameters = @{ Name = 'ItemType'; ValidateSet = 'Directory' }, @{ Name = 'Force' }, @{ Name = 'Path'; ValidateSet = 'C:\Users\testguy\ONLYthis' } }

# Functions to make visible when applied to a session
# VisibleFunctions = 'Invoke-Function1', @{ Name = 'Invoke-Function2'; Parameters = @{ Name = 'Parameter1'; ValidateSet = 'Item1', 'Item2' }, @{ Name = 'Parameter2'; ValidatePattern = 'L*' } }

# External commands (scripts and applications) to make visible when applied to a session
VisibleExternalCommands = 'c:\scripts\this.ps1'

# Providers to make visible when applied to a session
# VisibleProviders = 'Item1', 'Item2'

# Scripts to run when applied to a session
# ScriptsToProcess = 'C:\ConfigData\InitScript1.ps1', 'C:\ConfigData\InitScript2.ps1'

# Aliases to be defined when applied to a session
#AliasDefinitions = @{ Name = 'test-alias'; Value = 'Get-ChildItem'}

# Functions to define when applied to a session
# FunctionDefinitions = @{ Name = 'MyFunction'; ScriptBlock = { param($MyInput) $MyInput } }

# Variables to define when applied to a session
# VariableDefinitions = @{ Name = 'Variable1'; Value = { 'Dynamic' + 'InitialValue' } }, @{ Name = 'Variable2'; Value = 'StaticInitialValue' }

# Environment variables to define when applied to a session
# EnvironmentVariables = @{ Variable1 = 'Value1'; Variable2 = 'Value2' }

# Type files (.ps1xml) to load when applied to a session
# TypesToProcess = 'C:\ConfigData\MyTypes.ps1xml', 'C:\ConfigData\OtherTypes.ps1xml'

# Format files (.ps1xml) to load when applied to a session
# FormatsToProcess = 'C:\ConfigData\MyFormats.ps1xml', 'C:\ConfigData\OtherFormats.ps1xml'

# Assemblies to load when applied to a session
# AssembliesToLoad = 'System.Web', 'System.OtherAssembly, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'

}

Let’s skip ahead to line 25. What I’m doing here is white listing any cmdlet that starts with Get- or Measure- as well as Select-Object. Inherently, any of the parameters and values for the parameters are whitelisted, too. I can hear you worrying, though. “What if a Get- command contains a method that allows you to write or set data? I don’t want that!” Well, rest assured. JEA runs in No Language mode which prevents users from doing any of those shenanigans.

Also in line 25, I’m doing something more specific. I’m including a hashtable. Why? Because I want to allow the New-Item cmdlet but only certain parameters and values. I’m allowing the ItemType parameter, but only if the user sets it to Directory. I’m allowing Force, which doesn’t take a value. I’m also allowing the Path parameter, but only a specific path. If a user tries to use the New-Item cmdlet but violates these rules, the user will get an error.

On line 19, I can import specific modules without opening up the Import-Module cmdlet. These modules are automatically imported when the session starts.

On line 28, we can make specific functions available to connecting users.

Line 31 is interesting. Here I’m making an individual script available to the connecting user. The script contains a bunch of commands that I haven’t white listed, so, is the user going to be able to run it? Yes. Yes they are. The user can run that script and the script will run correctly (assuming other permissions are in place) without having the individual cmdlets white listed. It is a bad idea to allow your restricted users to write over scripts you make available to them this way. 

On line 37, you can basically configure a login script. Line 40 lets you define custom aliases and line 43 lets you define custom functions that only exist in these sessions. Line 46 is for defining custom variables (like $myorg = 'ThmsRynr Co.') which can be static or dynamic.
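For example, here’s a hypothetical pair of PSRC entries (not part of the module built above) that define a session-only helper function and expose it to the connecting user:

# In the .psrc file: define a custom function and make it visible
FunctionDefinitions = @{ Name = 'Restart-Spooler'; ScriptBlock = { Restart-Service -Name 'Spooler' } }
VisibleFunctions = 'Restart-Spooler'

A user in a session built from this role capability could restart the print spooler without being able to run Restart-Service against anything else.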

With these tools at your disposal, you can configure absolutely anything about a user’s session and experience. Sometimes, you might have to use a little creativity, but anything is possible here.

Lastly, you need to set up the JEA endpoint. You can also overwrite the default endpoint so every connection hits your JEA config but you may want to set up another unconstrained endpoint just for admins… just in case.

Register-PSSessionConfiguration -name 'JEA-Test' -path $dir

That’s it. You’re done. Holy, that was way too easy for how powerful it is. Now when a user wants to connect, they just run a command like this and they’re in a session limited like you want.

Enter-PSSession -ComputerName mvp-trayner -ConfigurationName JEA-Test

If they are in my local “Test Users” group, they’ll have the “testers” role applied and their session will be constrained like I described above. You’ll need to make sure your test users have permissions to remotely connect at all, though, otherwise the connection will be rejected before a JEA config is applied.

I can think of a bunch of use cases for JEA. For instance...

1. Network Admins
I'd like my network admins to be able to administer DHCP and DNS on our Windows servers which hold these roles without having carte blanche admin rights to everything else. I think this would involve limiting the cmdlets available to those including *DHCP* or *DNS*.
2. Certificate Management
We use the PSPKI module for interacting with our Enterprise PKI environment. For this role, I'd deploy the module and give users permissions to use only the PSPKI cmdlets. I'd use the Windows CA permissions/virtual groups to allow or disallow users manage CA, manage certificates, or just request certificates.
3. Code Promotion
Allowing people connecting via JEA to read/write only certain areas of a filesystem isn't practical. The way I'd get around this is to allow access to run only one script which performed the copy commands or prompted for additional info as required. You could mix this in with PowerShell Direct and promote code to a server in a DMZ without opening network holes or allowing admin access to a DMZ server.
4. Service Account for Patching
We have a series of scripts that apply a set of rules and logic to determine if a server needs to be patched or not. All it needs to do is perform some WMI queries, communicate with SCCM (which has the service installed to actually do the patching) and reboot the server. Instead, right now, that service account has full admin rights on the server.
5. Help Desk
Everybody's help desk is different but one job I'd like to send to my help desk is some limited Active Directory management. I'd auto-load the AD module and then give them access to very restricted cmdlets and some parameters. For instance, Get-ADUser and allow -Properties but only allow the memberof, lockedout, enabled and passwordlastset values. I might also allow them to add users to groups but only if the group was in a certain OU or matched a certain string (ie: if the group ends in "distribution list").
6. Print Operators
We have a group of staff on-site 24/7 that service a giant high speed print device. There are a number of servers that send it jobs and many are sensitive. I'd like to give the print operators group permissions to reach out and touch these servers only for the purposes of managing print jobs.
7. Hyper-V Admins/Host Management
These guys need the Hyper-V module and commands within it as well as some limited rights on the host, like Get WMI/CIM objects but not the ability to set WMI/CIM objects.

Get playing!

The possibilities of what you can do with JEA are endless. While the DevOps mentality is flourishing, the need to enable access to different systems is growing. With JEA, you can enable whatever kind of access you need, without enabling a whole bunch of access you don’t. That’s probably why it’s called “Just Enough Administration”.

Read More

Using PowerShell To Simulate A Ransomware Attack

Disclaimer: There are tons of different ransomware variants which behave in tons of different ways. This is an example of simulating just one of those behaviors - one that I’ve found to be common.


It’s a commonly held belief that there’s nothing you can do to guarantee you’ll never be hit by a ransomware attack; you can only be prepared with systems and processes to detect one, stop it, and recover from it. If you’re putting in some sort of system to detect a ransomware attack, you’d probably be wise to test it, but how? Installing ransomware is not something I’d recommend.

A common way of detecting a ransomware attack is monitoring a file system for a series of conditions. This is one such way you might configure these conditions:

  1. A user modifies more than 100 files
  2. A user renames more than 100 files
  3. 1 and 2 happen in under 60 seconds

This works nicely because ransomware will usually encrypt a file (modifying it) and append an extension (renaming it) in a short amount of time. You might have some false positives with this, or you might want to make the conditions more strict or lenient but hey, it’s my blog and this is what I am testing with.

So how do you simulate this behavior with PowerShell? Like this.

$strDir = "C:\temp\test1\"
GCI $strDir | Remove-Item -Force
1..200 | % { $strPath = $strDir + $_ + ".txt"; "something" | Out-File $strPath | Out-Null }
Measure-Command { 1..101 | % { $strPath = $strDir + $_ + ".txt"; $strNewPath = $strPath + ".chng"; "changed" | Out-File -Append $strPath; Rename-Item -Path $strPath -NewName $strNewPath } }

Lines 1, 2 and 3 set up the environment. $strDir is the location we’re monitoring for ransomware attacks (or a test directory in this case). Line 2 empties the test directory, which you probably don’t want to do indiscriminately in a production area but I want to do in my test area.

Line 3 creates 200 txt files in $strDir. 1..200 is a slick way of writing all the numbers between 1 and 200 inclusive. Try it yourself in a PowerShell console. Then, for each of those numbers, we’re creating a file and suppressing the output.
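If the range operator is new to you, try it on its own first:

PS C:\> 1..5
1
2
3
4
5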

Line 4 is the ransomware simulation. For 101 files, we’re making a variable $strPath which is an individual file we created in line 3. We’re also crafting a new path stored in $strNewPath which is the same file but with an extension. Then I’m changing the contents of the file by writing “changed” inside it. Finally, I rename the file. The whole thing is wrapped in a Measure-Command block so I can see how long it takes. On my test system, the ransomware part took 688 ms.

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 688
Ticks             : 6887630
TotalDays         : 7.97179398148148E-06
TotalHours        : 0.000191323055555556
TotalMinutes      : 0.0114793833333333
TotalSeconds      : 0.688763
TotalMilliseconds : 688.763

There you go! Try it yourself and see if you can detect this simulated ransomware attack.

Read More

Quick Tip - Create New LPR Printers Using PowerShell

There are a bunch of overloads for Add-Printer and Add-PrinterPort to accommodate different kinds of printers and ports. I found it tough, however, to find real examples of how to use these cmdlets to add LPR printers and ports. Not TCP/IP, not TCPLPR, not local ports. I figured it out, though, and now here’s how I did it.

foreach ($printer in $(Get-Content -Path 'c:\temp\printers.txt'))
{
    Add-PrinterPort -ComputerName PrintServer -PrinterName $printer -HostName 'PrinterHostName'
    Add-Printer -ComputerName PrintServer -DriverName 'Name Of Your Driver' -PortName "PrinterHostName:$printer" -Name $printer
}

There are no real surprises here. It’s just a matter of finding the right combinations of parameters and their values to make LPR printers and ports happen. In this example, I’m creating a bunch of them out of a list I have in a file.
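In case it helps, the input file (c:\temp\printers.txt) is nothing fancy: just one printer name per line. These names are made up:

Accounting-Printer1
Warehouse-Printer2
FrontDesk-Printer3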

Read More

Splitting Strings On Escaped Characters In PowerShell - Literal vs. Dynamic Content

Before we get into this post, here’s a little required reading: http://blogs.technet.com/b/heyscriptingguy/archive/2015/06/20/weekend-scripter-understanding-quotation-marks-in-powershell.aspx

This is a “Hey, Scripting Guy!” post by Don Walker about using single vs. double quotes to wrap strings and other items ( ‘ vs. “ ). The bottom line is that single quotes should be your default go-to and denote a literal string. Double quotes are only to be used when dynamic content is involved. It’s all explained quite clearly in the post linked above.

Awesome information, but, it doesn’t talk about escape characters. In PowerShell the backtick character ( ` ) - the one you hit along with shift to get the tilde character ( ~ ) on most keyboards - is what’s known as an escape character. Here’s some more reading on escape characters if you’re unfamiliar.

Now, what if I have something like this?

$Body = 'Thank you for stopping by to see us.

It was nice to see you.

Stop by again soon.

Thanks!'

It’s just a multi-line string with blank lines in between each of the lines with content. Now, what if I wanted to keep each of the content lines on its own line while removing all the lines that are blank? Well, since $Body is one big multi-line string, I can split it on “new line”. Using PowerShell escape characters, a new line is denoted by:

`r`n

So can I do this?

Write-Host 'Using single quotes' -ForegroundColor Green
$Body.split('`r`n') | % { if (-not [string]::IsNullOrWhiteSpace($_)) { $_ } }

I’m splitting $Body on each new line, and for each line, if it is not null or white space (using some of the information from this post), I write it. I’m using single quotes to wrap the new line marker to split up $Body. Well, unfortunately, the output looks like this.

[Image: Splitting Strings 1]

Well, that’s not exactly what I was hoping for. Instead of splitting $Body on a new line, it looks like it’s split it on the letters n and r. It turns out that the escape character, like variables and the output from commands, is dynamic content. To do what I’m trying to do, the command needs to look like this.

Write-Host 'Using double quotes' -ForegroundColor Green
$Body.split("`r`n") | % { if (-not [string]::IsNullOrWhiteSpace($_)) { $_ } }

The only difference is the value in the split command. Instead of single quotes I’ve got double quotes wrapping the new line marker. Now the output looks like this.

[Image: Splitting Strings 2]

Perfect! So remember, escape characters are dynamic content. They are not considered part of a literal string.
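One more way to convince yourself: compare the length of the “same” string wrapped in single vs. double quotes.

PS C:\> ('`r`n').Length
4
PS C:\> ("`r`n").Length
2

In single quotes it’s four literal characters; in double quotes it’s a two-character string - a carriage return and a line feed.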

Read More

Quick Tip - Which Of These Groups Are These Users Members Of?

Here’s a quick PowerShell function I put together that you might like to use or pick pieces from. The point of the function is to take a list of usernames and a list of groups and tell you which users are members of which groups, including through nested group membership.

#requires -Version 1 -Modules ActiveDirectory
function Test-IsGroupMember
{
    param (
        [Parameter(Mandatory=$True,
                Position=1,
        ValueFromPipeline=$True)]
        [Object]$Usernames,
        [Parameter(Mandatory=$True,
                Position=2,
        ValueFromPipeline=$True)]
        [Object]$Groups
    )

    foreach ($strGroup in $Groups) {
        $arrMembers = @()
        $arrMembers = (Get-ADGroupMember -Identity $strGroup -Recursive).SamAccountName
        Write-Output "$strGroup has $($arrMembers.count) members"
        $Usernames | % { if ($arrMembers -contains $_) { write-host " * $_ is a member of $strGroup" } }
        Write-Output ''
    }
}

As you can see, this function requires the ActiveDirectory PowerShell module and the function is named Test-IsGroupMember. It takes two parameters called Usernames and Groups. Both are “object” types so they could be an array or a string. I didn’t want to make overloaded versions of a script this simple so I took this shortcut. It’s expected that the values in Usernames and Groups will be SamAccountNames.

On Line 15, I start the work. For all of the groups you pass the function, it determines the recursive group members and extracts the SamAccountName attribute of the members returned. Then to the output stream, we write that the currently evaluated group has a number of members. On Line 19, we check to see if any of the usernames in the Usernames parameter are contained within the members of the group. I could have used a Compare-Object here but I didn’t. If the user is present in both arrays, we report back.
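As an aside, here’s a sketch of what that check could look like with Compare-Object instead (this isn’t what the function above uses):

#Names present in both lists, i.e., the passed users who are members of the group
Compare-Object -ReferenceObject $Usernames -DifferenceObject $arrMembers -IncludeEqual -ExcludeDifferent | Select-Object -ExpandProperty InputObject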

Here are some examples of how I like using this function.

#Take an array of users and an array of groups to see which users are in which groups
Test-IsGroupMember @('user1','user2','user3','ThmsRynr') @('Group1','Group2','Group3')

#See if ThmsRynr is a (nested) member of SomeGroup
Test-IsGroupMember ThmsRynr SomeGroup

#See if all the members of InterestingGroup are members of any group whose name matches *Keyword*
Test-IsGroupMember -Usernames (Get-ADGroupMember InterestingGroup).SamAccountName -Groups (Get-ADGroup -filter "Name -like '*Keyword*'").SamAccountName

Pretty flexible.

Read More

Sharing MVPDays YEG Presentation Material

Last week, I had the distinct pleasure of speaking twice at MVPDays in Edmonton. I did two sessions. The first was titled “PowerShell 5.0 - A Brave New World” where Sean Kearney and I introduced the tip of the iceberg that is all the new stuff in PowerShell 5.0. The other session I did was on my own, titled “Going From PowerShell Newbie to PowerShell Ninja”. In the latter session, I promised to share some things today, and I’m here to deliver.

OPML File of Blogs I Follow - This is a file that you can import into any modern RSS reader. I follow 40+ blogs on PowerShell, technology and related topics. Feel free to take a look through the blogs I’ve endorsed here and follow all of them, or just the ones that make sense to you. Among these blogs are the premier resources I mentioned in my session: Hey, Scripting Guy! and PowerShell.org.

My PowerShell People Twitter List - If you’re looking to find people on Twitter who are knowledgeable about PowerShell, take a look at this list I curate. You can follow the whole list or take a look at these people I personally follow and recommend. Remember, Twitter is a great way to get introduced to new resources and connect with like-minded people. Follow the #PowerShell hashtag and join in for #MVPHour every other Monday.

Subscribe to the EMUG Mailing List - If you live in the Edmonton area and enjoyed MVPDays, you should consider signing up for the Edmonton Microsoft User Group mailing list, if you aren’t signed up already. This is the best way to stay informed about when similar events will be occurring. In fact, EMUG hosts several events throughout the year just for our members. Check out PowerShellGroup.org to find other regional PowerShell user groups who share their content, or join the virtual group.

And, of course, you can find me on Twitter (best way to reach me) and LinkedIn.

Good luck on your journey from PowerShell Newbie to PowerShell Ninja, and happy scripting!

Read More

Quick Script Share - Tell Me Everyone With Access To This Directory

Trying something new. Here’s a quick script I threw together to satisfy a request along the lines of “tell me all the users who have access to this directory”. It’s easy to see all the groups that have access just by right-clicking a directory and going to the Security tab but it’s a pain to get all the users who belong to those groups – especially if there are nested groups (within nested groups, within nested groups). Hence, this script. In addition to the ActiveDirectory PowerShell module, you of course need to be able to read the ACL on the directory you are interested in so use your admin account.

In this experimental post, I’m not going to break down the script; instead, I’ve quickly commented in-line most of the tricky bits. I think it’s pretty straightforward, but then again, I wrote it. Let me know what you think.

#requires -Version 1 -Modules ActiveDirectory

#function to return the SamAccountNames of all the users in a group - if the group is empty, return the name of the group
Function Get-NestedGroupMember {
    param
    (
        [Parameter(Mandatory=$True,
                Position=1,
        ValueFromPipeline=$True)]
        [string]$Group
    )
 
    $Users = @(Get-ADGroupMember $group -recursive).SamAccountName
    if ($Users) { return $Users }
    else { return $Group }
} 

#function to enumerate types of access held by individuals to a directory
Function Get-Access {
    param
    (
        [Parameter(Mandatory=$True,
                Position=1,
        ValueFromPipeline=$True)]
        [string]$Dir
    )
    
    #record the current erroractionpreference so we can set it back later
    $OldEAP = $ErrorActionPreference
    
    #set erroracctionpreference to silently continue so we ignore errors from empty groups and weird broken ACLs
    $ErrorActionPreference = 'silentlycontinue'
    
    #get the full ACL of the directory from the parameter
    $ACL = Get-Acl $Dir
    
    #retrieve the Access attribute
    $arrAccess = @($ACL.Access)
    
    #separate the IdentityReference and FileSystemRights attributes from within the Access attribute
    $arrIdentRef = $arrAccess | select-object IdentityReference, FileSystemRights
    
    #for each item in the access attribute of the ACL, write the type of filesystemrights associated with the entry and get the recursive group membership
    $arrIdentRef | % { Write-Output "ACCESS $($_.FileSystemRights) HELD BY: `r`n$(Get-NestedGroupMember $_.IdentityReference.Value.ToString().Split('\')[-1])"; Write-Output "`r`n`r`n" }
    
    #set the erroractionpreference back to whatever it was before we started
    $ErrorActionPreference = $OldEAP
}

Get-Access '\\host\share\some folder'


Read More

My September 2015 Scripting Puzzle Solution

If you haven’t heard, PowerShell.org is taking the lead on organizing the PowerShell Scripting Games. There’s a new format that involves monthly puzzles. Here’s their post on September’s puzzle: http://powershell.org/wp/2015/09/05/september-2015-scripting-games-puzzle/

Here is my solution. The summarized instructions are: “You have a CSV with one column, “machinename”, and you need to return the friendly OS name for each. They’re a mix of machines dating back to WinXP. All have PowerShell 2.0 or better and WinRM is open between you and each host. Try to limit your usage of curly braces.”

I did this.

Import-Csv -Path $InputFile | ForEach-Object -Process { Get-WmiObject Win32_OperatingSystem -ComputerName $_.MachineName | Select-Object -Property PSComputerName, Caption } | Export-Csv -Path $OutputFile -NoTypeInformation

It’s pretty verbose but I made it that way for readability. I use a grand total of one pair of curly braces in the solution, which I hope satisfies the definition of “limited”. What I’m doing is importing the CSV, which is located wherever $InputFile is, and for each of the lines in that CSV, I am performing a task on the computer indicated. That task is to get the Win32_OperatingSystem WMI Object which contains all kinds of neat info about a system’s OS. Of the data returned, I am selecting the PSComputerName, which should equal the same value as the line in the input file (but doesn’t cost me any curly braces to return) and the Caption, which is the friendly name of the OS. I export that into $OutputFile’s location.

Fun times!

Read More

PowerShell Function To Get Time Since A User's Password Was Last Changed

Here’s a small function I put in my PowerShell profile to tell me how long it’s been since an AD user’s password was last changed. You do know how to change your PowerShell profile, don’t you? Just type the following in a PowerShell prompt.

notepad $profile

That will open your PowerShell profile in Notepad. You might be asked to create one if you don’t have anything there yet. Then just save that and next time you open PowerShell, whatever code you have in your profile will be executed. The code I’m putting in there right now is the definition for this function.

function Get-TimeSinceLastPWSet {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$True,
        Position=1,
        ValueFromPipeline=$True)]
        [string]$Username
    )
    
    $tsSinceLastPWSet = New-TimeSpan $(get-aduser $Username -Properties Passwordlastset).Passwordlastset $(get-date)
    return $tsSinceLastPWSet
    }

It’s pretty straightforward. My function is named Get-TimeSinceLastPWSet and takes one parameter, the username of the user we’re interested in. On Line 10, the actual work gets done. I’m making a new TimeSpan object assigned to $tsSinceLastPWSet which is the time between the user’s Passwordlastset AD attribute and the current date/time.

Since the function returns a timespan object, you can manipulate it like this to get more friendly output. (More info on Composite Formatting from MSDN - there are no PowerShell examples, but it looks a lot like C#.)

Get-TimeSinceLastPWSet ThmsRynr | % { '{0:dd} days, {0:hh} hours' -f $_ }

This will give you output that simply looks like “10 days, 12 hours” instead of the generic list formatted output you get when you write out a timespan object. I’ve actually made that the default behavior of the function I put in my personal profile because that’s more valuable to me.

Mine looks like this.

function Get-TimeSinceLastPWSet {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$True,
                Position=1,
        ValueFromPipeline=$True)]
        [string]$Username
    )
    
    $tsSinceLastPWSet = New-TimeSpan $(get-aduser $Username -Properties Passwordlastset).Passwordlastset $(get-date)
    $strFormatted = '{0:dd} days, {0:hh} hours' -f $tsSinceLastPWSet
    return $strFormatted
    }

Just a small tweak. It returns that nice-to-look-at string instead of the timespan object.

Read More

Detecting An Exchange Management Shell Connection

You don’t log onto an Exchange server via RDP and open the Exchange Management Shell application when you want to do Exchange-PowerShell things, do you? You follow the steps in my Opening A Remote Exchange Management Shell post, right?

But how do you detect if you have an open remote connection or not? Well, there’s a bunch of different ways, so here’s an easy one. First, though, we need to understand a couple things about what happens when you open a remote Exchange Management Shell connection.

Here’s what the output of my Get-Module cmdlet looks like before I do anything Exchange-y.

[Image: Get-Module before anything Exchange related (click for larger)]

I’m in ISE, I have the AD cmdlets added. Nothing going on here is too crazy. Now here’s what it looks like after I open a remote Exchange Management Shell connection like I told you how to do in the post linked above.

[Image: Get-Module after adding Exchange Management Shell (click for larger)]

Notice that the Exchange stuff gets added under a tmp name? And that it’s different every time? That doesn’t exactly make it easy to detect. With the ActiveDirectory cmdlets you can just run Get-Module -name ActiveDirectory and it will either return something or not. Easy. How are you supposed to do that in a predictable, repeatable fashion for Exchange, especially since any other remote shells created to other services in the same manner may also be added with a tmp_ prefix?

In order to figure out how we can determine if we have a module added that belongs to a remote Exchange Management Shell, let’s take a closer look at the tmp module that just got added.

[Image: Details of the last module added (click for larger)]

At first glance, we’re obviously not going to be able to use the Name or Path attributes to identify remote Exchange Management Shell connections. ModuleType, Version, most of the others all look useless for us here. What looks useful, though, is the Description attribute which reads “Implicit remoting for http://my-exchange-server.fqdn/powershell”. That, we can work with. Here’s my code to tell me if I have a module added whose description is for a remote session to my Exchange server.

get-module | select Description | ? { $_ -match "my-exchange-server" }

The code will either return the description of the module if it’s added, or null. You can work with it like this.

$ExchAdded = get-module | select Description | ? { $_ -match "my-exchange-server" }
if ($ExchAdded) { write-host "Yes, added" } else { write-host "No, not added" }

Check it out.

[Image: Code at work (click for larger)]

Read More

My August 2015 Scripting Puzzle Solution

If you haven’t heard, PowerShell.org is taking the lead on organizing the PowerShell Scripting Games. There’s a new format that involves monthly puzzles. Here’s their post on August’s puzzle: http://powershell.org/wp/2015/08/01/august-2015-scripting-games-puzzle/

Here is my solution. The instructions are to get information back from a JSON endpoint (read more about it in the link above).

First things first, here’s how I did the one-liner part.

Invoke-WebRequest http://www.telize.com/geoip | ConvertFrom-Json | Format-Table Longitude,Latitude,Continent_Code,Timezone -AutoSize

This brings back exactly what Mr. Don Jones has asked for. I’m using the Invoke-WebRequest cmdlet to make a web request to that IP and converting what’s returned using ConvertFrom-Json. Then it’s just a matter of formatting the output and selecting only the items we care about for this puzzle.

Alright, that wasn’t so bad. How about the next challenge? I wrote the following function. They asked for an advanced function, but I skipped the comment-based help and the begin/process blocks. I could clean up how I work with the $IP parameter a bit, but this is easier to look at and explain.

function Get-GeoIP
{
    param
    (
        [array]$Attributes = '*',
        [IPAddress]$IP
    )
    Try
    {
        if ($IP) { Invoke-WebRequest "http://www.telize.com/geoip/$IP" | ConvertFrom-Json | Select-Object $Attributes }
        else { Invoke-WebRequest 'http://www.telize.com/geoip' | ConvertFrom-Json | Select-Object $Attributes }
    }
    
    Catch [Exception]
    {
        throw $_.Exception.Message
    }
}

I’ve declared two parameters, $Attributes and $IP. $Attributes are the attributes we want to return. In our puzzle instructions, we’re asked for Longitude, Latitude, Continent_Code and Timezone but you could use this function to get any of them. By default, the function will return all attributes. $IP is another IP address that we can get data for. If you don’t specify one, the function will retrieve data for the client’s IP. Otherwise, we can get data for an IP that isn’t the one we’re making our request from.

Here are a couple examples of the function in action.

PS C:\> Get-GeoIP


longitude      : redacted
latitude       : redacted
asn            : redacted
offset         : redacted
ip             : redacted
area_code      : 0
continent_code : NA
dma_code       : 0
city           : Edmonton
timezone       : America/Edmonton
region         : Alberta
country_code   : CA
isp            : redacted
postal_code    : redacted
country        : Canada
country_code3  : CAN
region_code    : AB

Here, I’m just running the script with no parameters set. It gets all the data back from my IP. I’ve sanitized a lot of the data returned for the purpose of publishing this post but it was all returned correctly.

PS C:\> Get-GeoIP -Attributes @('Longitude','Latitude','Continent_Code','Timezone') -IP 104.28.14.25 | Format-Table -AutoSize

longitude latitude continent_code timezone           
--------- -------- -------------- --------           
-122.3933  37.7697 NA             America/Los_Angeles

Here, I asked for the attributes from the puzzle and specified the IP address for PowerShell.org. You can see that it returned exactly what we’d expect.

Finally, the challenge asks us to hit another public JSON endpoint. I don’t have a favorite but found one that shows you your HTTP request information. Here is what it looks like in action.

PS C:\> Invoke-WebRequest 'http://headers.jsontest.com/' | ConvertFrom-Json | Format-Table -AutoSize

Host                 User-Agent                                                           
----                 ----------                                                           
headers.jsontest.com Mozilla/5.0 (Windows NT; Windows NT 6.2; en-US) WindowsPowerShell/4.0

Interesting user agent.

Read More

How Do You Tell If Two Directories Have The Same Permissions?

The title of this post is a bit funny. The answer is obviously “You can pop both folders open in Windows Explorer, right click, Properties and compare the security tab!” right? Well, you can, but what about folders that have a lot of complicated permissions? What if you want to compare 100 folders? I don’t know about you but I’m not opening 100 folders and comparing the permissions on them all manually. If only PowerShell could help us! Well it can.

In this example, I have three subdirectories in my c:\temp folder. They’re named test1, test2 and test3. Test1 and test2 have the same permissions but test3 has different permissions than the first two.

The first command to get familiar with is the Get-ACL command. ACL stands for Access Control List. This command may take different objects as parameters but one type of object is a path to a directory. Do a Get-ACL someDirectory | Get-Member and you’ll see the huge number of methods and properties that get returned. We’re only really interested in one property though, the Access property.

(Get-Acl c:\temp\test1).Access

#returns
FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : BUILTIN\Administrators
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : NT AUTHORITY\SYSTEM
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : ReadAndExecute, Synchronize
AccessControlType : Allow
IdentityReference : BUILTIN\Users
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : Modify, Synchronize
AccessControlType : Allow
IdentityReference : NT AUTHORITY\Authenticated Users
IsInherited       : True
InheritanceFlags  : None
PropagationFlags  : None

Look at that. A list of all the different permissions on the folder we care about! Now all we have to do is compare this ACL to the ACLs of other directories. For this, why not simply use the Compare-Object cmdlet? Here’s the full script and commands to compare three folders. I’ll break it all down.

$ACLone = get-acl "C:\temp\test1"
$ACLtwo = get-acl "C:\temp\test2"
$ACLthree = get-acl "C:\temp\test3"
write-host "Compare 1 and 2 ----------------------"
Compare-Object -referenceobject $ACLone -differenceobject $ACLtwo -Property access | select sideindicator -ExpandProperty access | ft
write-host "Compare 1 and 3 ----------------------"
Compare-Object -referenceobject $ACLone -differenceobject $ACLthree -Property access | select sideindicator -ExpandProperty access | ft

The first three lines are just getting the ACLs for the three directories I care about and storing the values in variables. There’s tons of better ways to get and organize that information but this way lays it out nicely for this example.

Line 5 is our first Compare-Object command and it’s actually pretty intuitive, in my opinion. It’s taking a reference object ($ACLone) and comparing to a difference object ($ACLtwo) and it’s comparing the Access property. This will return the differences between the two ACLs which is totally our goal. The problem is it looks kind of ugly and useless so I pipe the result into a select command. I’m expanding the Access property so you can see exactly which items are different and formatting it as a table.

On lines 5 and 7 you’ll see I’m also selecting a property called SideIndicator. What’s that? Well, when you compare two objects, in addition to seeing a list of differences, don’t you also want to see which object has the different values? SideIndicator is either => or <= depending on which object has the unique value. I’ll explain.

Here is the output of line 7 (comparing two directories with different ACLs). It’s snipped and edited but you’ll see the important parts and yours won’t look too different.

[Image: Compare-Object output (click for larger)]

The side indicator column points left or right. This indicates whether the unique value is on the reference object or the difference object. Arrows pointing left indicate items on the reference object only, arrows pointing to the right indicate items on the difference object only.
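If you want to see side indicators without any ACL machinery involved, a toy comparison shows the same behavior:

PS C:\> Compare-Object -ReferenceObject 1,2,3 -DifferenceObject 2,3,4

InputObject SideIndicator
----------- -------------
          4 =>
          1 <=

4 only exists in the difference object, so its arrow points right; 1 only exists in the reference object, so its arrow points left.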

There you go! You can use this and scale it to programmatically compare file system permissions using PowerShell.

You can download a script I published for this purpose from the TechNet Script Center.

Read More

Quick Tip - Searching Exchange Message Tracking Logs (Get Results From Every Server)

When you use the Get-MessageTrackingLog cmdlet, by default, it only searches for messages/events on the server that you’re connected to (see my post on creating connections to Exchange). That’s not great in a multi-server environment. I want results from every server.

My solution is the following.

$results = $null
get-transportservice | foreach-object { $results += Get-MessageTrackingLog -server $_.Name -start (get-date).addhours(-1) -end (get-date) -resultsize unlimited | Select-Object -Property eventid,serverhostname,sender,recipients,messagesubject,timestamp }
$results | Sort-Object -Property Timestamp | ft 

The Get-TransportService cmdlet gets a list of all the transport servers in your infrastructure. For each of the servers we get back, I’m running the Get-MessageTrackingLog cmdlet and appending the results to a $results variable. I’m taking that results collection and sorting it chronologically.
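The same pattern works if you need to narrow the search; Get-MessageTrackingLog also accepts parameters like -Sender and -Recipients. For example (the address here is a placeholder):

$results = $null
get-transportservice | foreach-object { $results += Get-MessageTrackingLog -Server $_.Name -Sender 'someone@yourdomain.com' -Start (get-date).adddays(-1) -End (get-date) -ResultSize Unlimited }
$results | Sort-Object -Property Timestamp | ft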

Read More

My July 2015 Scripting Puzzle Solution

If you haven’t heard, PowerShell.org is taking the lead on organizing the PowerShell Scripting Games. There’s a new format that involves monthly puzzles. Here’s their post on July’s puzzle: http://powershell.org/wp/2015/07/04/2015-july-scripting-games-puzzle/

Here’s my solution. I did the extra challenges as well, except for the optional “go obscure, use aliases” challenge. I didn’t submit my solution; I just do these for funsies and to share my solutions on my blog.

$Computers = @('comp1','comp2'); Get-WmiObject -Class Win32_OperatingSystem -ComputerName $Computers | Format-Table PSComputerName,ServicePackMajorVersion,Version,@{N='BIOSSerial';E={$(Get-WMIObject -Class Win32_BIOS -ComputerName $_.PSComputerName).SerialNumber}}

The first thing I’m doing is defining an array of computer names which could have come from anywhere. Then, technically on the next line since there’s a semicolon, I have my command. It’s just a Get-WMIObject on the Win32_OperatingSystem class. The ComputerName parameter takes an array which is basically a built in ForEach-Object loop, satisfying the second and third challenge requirements. I pipe the results into a Format-Table command for easy reading but we’re not out of the woods yet. I select the PSComputername, ServicePackMajorVersion and Version properties pretty obviously but the last item, BIOSSerial looks strange.

The SerialNumber property in the Win32_OperatingSystem WMI object is different from the SerialNumber property in the Win32_BIOS WMI object. The puzzle clearly requests the BIOS serial number, so I created a custom table column which retrieved the BIOS serial number. Technically speaking, the only semicolon I use in this solution comes from the custom table command. I’m not counting the one that separates the definition of $Computers from the real command. I could have changed it to look like this, but I thought my solution was more readable.

Get-WmiObject -Class Win32_OperatingSystem -ComputerName @('comp1','comp2')

Good puzzle! I don’t do a ton with WMI or custom tables so this was weird for me, even if it was a relatively simple puzzle.

Read More

Get Random Lines From A File (Or Random Files From A Directory... Or Random Item From Any Collection)

Don’t ask me why but I recently had a need to get a random line from a text file. There’s a small piece of strange behavior that I came across with the cmdlet I chose to use: Get-Random. Get-Random does what it sounds like. It’s commonly used for getting random numbers (see this post I wrote a while ago about a gotcha with this behavior) but you can also pass it an input object.

For this example, I have a file in my c:\temp folder named random.txt. Its contents just look like this.

so
many
random
words
to
choose
from
which
one
will
be
picked?

So since Get-Random includes an -InputObject parameter, I should just be able to do the following, right?

Get-Random -InputObject C:\temp\random.txt

Well, if you were hoping it would be that easy, I’m afraid I’ve got some bad news. Every time you run this command, the InputObject specified is always the value returned.

[Image: getrandom1]

Well that’s not very helpful for a guy looking for a random line from the file. Turns out that -InputObject is looking for a collection of items, it’s not doing the work of examining the path to the file and extracting the data from it. That’s easy enough to get around. We’ll just do that work ourselves.

Get-Random -InputObject (get-content C:\temp\random.txt)

There we go. Get a random item from the collection returned by Get-Content C:\Temp\Random.txt. Then you get output like this.

[screenshot: a random line returned from the file]

You could get a random file from a directory like this.

Get-Random -InputObject (get-childitem c:\temp\)

Or, indeed, pass any array or hash table. Here’s an example of getting a random property from the $Host variable.

$Host | Select-Object (Get-Random -InputObject ($Host | Get-Member -MemberType Property).Name)

[screenshot: a random property of $Host]


Read More

Quick Tip - Strip Empty Lines Out Of A File

Here’s a quick one-liner that will remove all of the blank lines from a file.

get-content $PathToInput | % { if (-not [string]::IsNullOrWhiteSpace($_)) { $_ | out-file -append $PathToOutput } }

The first thing I do is get the content of the input file. This returns an array of each line in the file which I pipe into a foreach-object loop (alias %). In the if block, I’m detecting if the currently evaluated item is null or just white space. If it isn’t, I append it to the output file.
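
As an aside, the same idea works as a single pipeline that only opens the output file once; a rough equivalent (assuming the same $PathToInput and $PathToOutput variables) would be:

get-content $PathToInput | ? { -not [string]::IsNullOrWhiteSpace($_) } | out-file $PathToOutput

This avoids re-opening the output file for every non-blank line, which can matter on big files.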

Read More

How's your Windows Server 2003 migration going? Does that question scare you?

Remember 2003? 2003 was a good year. Camera phones got popular, Xbox took off, and I was a 14 year old in 9th grade. 2003 was also, obviously, the year that Microsoft released Windows Server 2003. Are you still running it? You shouldn’t be, but I bet lots of you are. That should scare you because less than six weeks from the time of this post, on July 14, 2015, Microsoft is ending support for Windows Server 2003. If you haven’t finished your Windows Server 2003 migration to newer operating systems (Windows Server 2012 R2 is an excellent choice), or worse, haven’t even started, you could face some very serious consequences. Let’s answer a few questions you might have about that.

What does it mean to be unsupported?

In case “end of support” isn’t clear, here are some of the highlights from the long list of concerns outlined in this IDC white paper on why you should upgrade (pdf). There are tons of reasons, but these were the ones that resonated with me.

  • Elimination of security fixes.

Holy smokes. No more patches? For a second that almost sounds like a good thing, right? You’re probably tired of patching servers. But, think of the consequences and implications of that. No more patches is a terrible, scary, awful thing. If I need to tell you why, you may consider a different career than the one that brought you to my blog. If you ever want to pass another audit, you better be receiving and applying security fixes for all your products, especially ones as fundamental as your Windows OSes.

  • Lack of support.

Do you ever call Premier Support? Read Technet blogs or forums? Microsoft is shutting down support for Windows Server 2003 once it hits end of life. If you want help upgrading, you better get it now because after the end of life date, it might be a challenge to get.

Saying “I can put this off, I’m just going to buy extended support!” is the wrong attitude to have. First, you could buy an Egyptian pyramid for the amount of money that extended support is going to cost. Second, all you’re doing is delaying the inevitable. You have to do this. Do it now. It’s going to hurt more to put it off and do it later.

Okay, so there are some good reasons to get off Windows Server 2003 BUT are there any good reasons to get on Windows Server 2012 R2?

There’s tons. Windows Server 2012 R2 came out Q4 2013 and is the result of decades of learning, improvement, technological landscape shifting, development and a bunch of other buzz-verbs that all mean that it’s better. It’s better. Windows Server 2012 R2 is better than Windows Server 2003. Here’s just a few articles that support that statement.

If you look around at all, you’ll find thousands more articles, slides, posts, tweets, talks and more on the benefits and features of Windows Server 2012 R2 over its predecessors.

Upgrading is so intimidating. I need help! Where can I get some?

Microsoft has your back on upgrading and migrating. There are lots of guides and articles on these topics but Microsoft has assembled, in my opinion, the best resource hub out there. Did you click that link? It takes you to the page with all the resources. Click one of these links to go to that page. I can’t overstate how important I think it is that you go to this page and read about the resources to help you migrate away from Windows Server 2003. All the links in this paragraph go to the same page. This is the page: https://www.microsoft.com/en-ca/server-cloud/products/windows-server-2003/default.aspx. It’s in your very best interest to go there and check out what’s there. Need the link one more time? Here.

Does it feel like I’m using this subsection of this post to direct you to Microsoft’s page with tons of resources you can use to make your migration possible, if not easy? It’s because I am. There’s tons of other resources out there, too, and they are a simple search away.

I get it. I want to upgrade. I've been pushing my organization to upgrade but I can't seem to get permission. What can I do?

Surely I’ve convinced you of the many great reasons to migrate away from Windows Server 2003 to Windows Server 2012 R2. These arguments make sense to an IT Pro but maybe not to an executive, business people, or sometimes even a developer. Here are a few of the common ways I see resistance, and my suggestions for overcoming them. Of course, every organization’s politics are different and you may need to figure it out yourself.

  • We have App XYZ that only runs on Windows Server 2003. It's crucial to our business. There's no new version.

Respectfully, if this is the honest to goodness truth for your organization, you might be on the Blockbuster/Kodak path of sustainability. Read this Wikipedia article on the theory of Diffusion of Innovations. Take special note of the chart that describes the different stages: Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. You don’t have to adopt every new innovation that comes across your desk, but if your entire business is dependent on a technology or product that is about to reach end of life, you’re in trouble. You’re already in the laggard stage of the adoption process if you’re still not off Windows Server 2003. Just don’t fall off the chart completely - get migrating!

There comes a point where you’re not upgrading to gain an advantage, but to catch up to competitors who have already surpassed you.

  • App XYZ is crucial to our business. There's a new version but we can't afford the down time to upgrade.

This one is easier to work with than the last one. Attack this resistance from two sides. First, reiterate the importance of upgrading and all the bad things that will happen if you don’t. Second, and most importantly, find business reasons that make migrating to Windows Server 2012 R2 or the new version of App XYZ desirable to your specific stakeholders. Often with executives and business groups, it’s even more important to PULL them towards something new than it is to PUSH them away from something old.

To address the downtime concerns, put effort into making a plan that makes the downtime as short and painless as possible. Do a side-by-side migration. Do the cut over at 3 in the morning when your customers are all asleep. Find a way to make the downtime as tolerable as possible.

  • We don't need new features. We accept the risk of running in an unsupported fashion. It's just not worth our time to migrate.

This is a naive attitude, in my opinion. If you can’t find a creative way to improve anything within your organization with even one new feature in Windows Server 2012 R2, you’re not looking. A willingness to accept the risk of running unsupported demonstrates a lack of complete understanding of the risk involved with doing so. What would your customers say if you told them that your systems don’t receive security updates any more? If you get resistance like this, you need to find a reason to pull your stakeholders towards the newer technologies and make sure they’re clear on the risks of maintaining status quo.

Alright, I'm ready to take this on! Now how about a summary of some kind?

Glad you asked. If you take anything out of this post, make it these few things.

  1. Being unsupported is bad. Really bad. You don't want to be unsupported for a lot of reasons including no more security patches.
  2. Windows Server 2012 R2 has a ton of new features that make it a great OS to migrate to.
  3. Microsoft has a lot of resources available to help you upgrade.
  4. Getting stakeholder permission for an upgrade is as much about selling the benefits of moving to a new system as it is about the disadvantages of staying on the old one.

Good luck and happy migrating!

Read More

Quick Tip - Find All The Mail Enabled Groups A User Is A Member Of

Here’s a one-liner that will help you find all the mail enabled groups that a user is a member of. A little pre-requisite reading is this bit on group types to understand the difference between a security group and a distribution group: https://technet.microsoft.com/en-us/library/cc781446%28WS.10%29.aspx?f=255&MSPPError=-2147217396

Here’s the one-liner!

(get-aduser ThmsRynr -properties memberof).memberof | % { get-adgroup $_ } | ? { $_.GroupCategory -eq "Distribution" } | ft Name

It might not be the epitome of efficiency but it works and served me well when I needed it to.

First, we’re running a Get-ADUser command on our interesting user and making sure to retrieve the MemberOf property in addition to the standard properties returned. Out of all of the returned properties, it turns out that MemberOf is the only one I’m interested in so I select only that property by wrapping the command in brackets and appending .MemberOf. Second, I’m piping all of the groups that the user is a member of into a foreach-object loop. For each of the objects returned, I’m performing a Get-ADGroup. I have to do this because I can’t necessarily tell which groups the user is a member of are mail enabled just from their name, I have to run the Get-ADGroup command to get more information. I’m piping these results into a where-object command where I select only the groups whose GroupCategory is equal to “Distribution” (see the pre-requisite reading above). Then I format the group names into a table.

I could have got every group in my Active Directory and searched for groups that contained my user as a member and were Distribution types, but in my situation, it was faster to only spot check the groups that the user was actually a member of. I have a lot of groups, you might not.
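
For comparison, a rough sketch of that “search every group” alternative (assuming the ActiveDirectory module) could look like this:

$userDN = (get-aduser ThmsRynr).DistinguishedName
get-adgroup -filter { GroupCategory -eq "Distribution" } -properties Members | ? { $_.Members -contains $userDN } | ft Name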

Read More

New Stuff - Get-Clipboard And Set-Clipboard - New In PowerShell 5.0

Predictably, there are lots of new cmdlets coming in PowerShell/Windows Management Framework 5.0. Two of them that just came out in build 10105 are the Get-Clipboard and Set-Clipboard cmdlets. The help docs aren’t all written at the time I’m writing this post but I wanted to introduce them and highlight a couple neat use cases I immediately thought of.

[screenshot: New Get-Clipboard and Set-Clipboard cmdlets]

Back in the old days of PowerShell 4.0, you had to pipe output to clip.exe or use the PowerShell Community Extensions to interact with your clipboard. Not anymore!

Looking at the Get-Clipboard syntax, it’s quickly apparent that you can do more than just get the clipboard’s text content but let’s start with that anyway. So, what if I go and select some text, right click and copy it. What can I do with the Get-Clipboard cmdlet?

PS C:\> Get-Clipboard
I copied this text to my clipboard.

Not exactly mind blowing. Similarly, you can use the Set-Clipboard cmdlet to put text on the clipboard.

PS C:\&gt; "This text was put on the clipboard using new cmdlets." | Set-Clipboard

PS C:\&gt; Get-Clipboard
This text was put on the clipboard using new cmdlets.

I’m probably not blowing your mind with this one either. Where this gets fun is when you consider the possibilities of using the -Format parameter. I can put more than just text on my clipboard, right? Let’s see what I get when I copy three files in my c:\temp directory to my clipboard. If I try to just use Get-Clipboard without any additional parameters or info like I did in the above examples, I won’t get anything returned, but what I can do is this.

PS C:\> Get-Clipboard -Format FileDropList


    Directory: C:\temp


Mode                LastWriteTime         Length Name                                                                                                                 
----                -------------         ------ ----                                                                                                                 
-a----         5/1/2015   8:02 AM             30 file1.txt                                                                                                            
-a----         5/1/2015   8:02 AM             18 file2.txt                                                                                                            
-a----         5/1/2015   8:02 AM             11 file3.txt

Now we’re doing cool things. And what kind of objects are these?

PS C:\> (Get-Clipboard -Format FileDropList)[0].GetType()

IsPublic IsSerial Name                                     BaseType                                                                                                   
-------- -------- ----                                     --------                                                                                                   
True     True     FileInfo                                 System.IO.FileSystemInfo

FileInfo! We can do all the same things with this array of files that we would do to the results of a Get-ChildItem command. This means we can go the other way too and use the Set-Clipboard cmdlet to put a bunch of files onto the clipboard.

PS C:\> Get-ChildItem c:\temp\file*.txt | Set-Clipboard

PS C:\> Get-Clipboard -Format FileDropList


    Directory: C:\temp


Mode                LastWriteTime         Length Name                                                                                                                 
----                -------------         ------ ----                                                                                                                 
-a----         5/1/2015   8:02 AM             30 file1.txt                                                                                                            
-a----         5/1/2015   8:02 AM             18 file2.txt                                                                                                            
-a----         5/1/2015   8:02 AM             11 file3.txt

Note with all of the above examples, you can use the -Append parameter to simply add on to whatever is already on the clipboard.
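
For instance, a quick sketch of what -Append looks like in practice:

PS C:\> "first line" | Set-Clipboard

PS C:\> "second line" | Set-Clipboard -Append

PS C:\> Get-Clipboard
first line
second line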

I won’t cover the other formats (Image and Audio) or the text format types because you need something to discover for yourself. The last thing I’ll point out is that you can easily clear the clipboard, too.

PS C:\> $null | Set-Clipboard

PS C:\> Get-Clipboard

PS C:\>

I’m not going to cover every new cmdlet that comes out with PowerShell 5.0 but this one is very accessible and I think I’ll be able to use it all over the place.

Read More

Quick Tip - Search Remote Computer Certificate Store

It’s really easy to search your local certificate store using PowerShell. You simply run a command like this.

dir Cert:\LocalMachine -rec | ? { $_.Subject -match "Interesting" }

The above command will recursively look through all the certs in the local machine store and return the ones that have the word “Interesting” in the subject. Not exactly re-inventing the wheel here.

There’s not a ton of great options for snooping through the certificate store of remote computers, though. The solution I chose to get around this is dead simple. I used the Invoke-Command cmdlet to scan the certificate store of a remote computer. It’s so easy that it almost feels like cheating.

Invoke-Command -ScriptBlock { dir Cert:\LocalMachine -rec | ? { $_.Subject -match "Interesting" } } -ComputerName ThmsRynr.mydomain.tld
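
And since Invoke-Command happily takes an array of computer names, you can fan the same check out to several machines at once. A quick sketch (the server names are made up):

$servers = @("server1.mydomain.tld","server2.mydomain.tld")
Invoke-Command -ComputerName $servers -ScriptBlock { dir Cert:\LocalMachine -rec | ? { $_.Subject -match "Interesting" } } | Select-Object PSComputerName,Subject,NotAfter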


Read More

Invitation - MVP Virtual Conference

This is a canned post provided by the Microsoft MVP program. I’m sharing it because I think it’s going to be a valuable event that readers of this blog could get a lot out of. I’m definitely going to be there and I’m really looking forward to it. Take a look and see if it’s something you’re interested in.




Register to attend the Microsoft MVP Virtual Conference

I wanted to let you know about a great free event that Microsoft and the MVPs are putting on, May 14th & 15th.  Join Microsoft MVPs from the Americas’ region as they share their knowledge and real-world expertise during a free event, the MVP Virtual Conference.

The MVP Virtual Conference will showcase 95 sessions of content for IT Pros, Developers and Consumer experts designed to help you navigate life in a mobile-first, cloud-first world.  Microsoft’s Corporate Vice President of Developer Platform, Steve Guggenheimer, will be on hand to deliver the opening Key Note Address.

Why attend MVP V-Conf?  The conference will have 5 tracks: IT Pro English, Dev English, Consumer English, Portuguese mixed sessions & Spanish mixed sessions. There is something for everyone!  Learn from the best and brightest MVPs in the tech world today and develop some great skills!

Be sure to register quickly to hold your spot and tell your friends & colleagues.

The conference will be widely covered on social media, you can join the conversation by following @MVPAward and using the hashtag #MVPvConf.

Register now and feel the power of community!


Read More

Quick Tip - Protect Your Active Directory From Finger Slips

Do you ever worry about giving Domain Admin or other Active Directory privileges to people? I do, so I decided to protect some sensitive items in my AD from accidental deletion - or as I like to call it, protecting against finger slips.

[screenshot: we’re talking about this flag]

I’ve got some OUs that have user and group objects that I would really miss if they were to be accidentally deleted. Furthermore, I would really miss any entire OU if it were to be deleted. I’m not interested in protecting individual computer accounts or user/group accounts in non-sensitive OUs.

Here’s the script I used:

$arrOUs = @("Sensitive OU1","Sensitive OU2")
$arrOUs | % { Get-ADObject -SearchBase "OU=$($_),DC=sub,DC=domain,DC=tld" -filter {(ObjectClass -eq "group")} | Set-ADObject -ProtectedFromAccidentalDeletion:$true }
$arrOUs | % { Get-ADObject -SearchBase "OU=$($_),DC=sub,DC=domain,DC=tld" -filter {(ObjectClass -eq "user")} | Set-ADObject -ProtectedFromAccidentalDeletion:$true }
Get-ADOrganizationalUnit -filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true

Line 1 defines an array of names of my sensitive OUs. Lines 2 and 3 are basically the same: they get all the AD objects in the sensitive OUs with an ObjectClass of group or user and protect them from accidental deletion. Why do this in two lines? I was getting inconsistent results (computer and other objects were returned) when I tried combining the filter. My AD isn’t that big so this works just fine for me. Line 4 protects all my OUs in my AD from accidental deletion.
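
To spot-check the results afterwards, something like this sketch should list any OUs that are still unprotected (ProtectedFromAccidentalDeletion has to be requested explicitly with -Properties):

Get-ADOrganizationalUnit -filter * -Properties ProtectedFromAccidentalDeletion | ? { -not $_.ProtectedFromAccidentalDeletion } | ft Name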

Read More

Quick Tip - String Manipulation - First Name Last Name to Last Name, First Name

I’ve got kind of a silly post this week. I often get a list of names in the format…

John Doe

Jane Doe

Mike Smith

Mary Smith

… that I actually need to be in the format…

Doe, John; Doe, Jane; Smith, Mike; Smith, Mary

… and sometimes, especially with long lists of names, it’s a pain to do the manipulation in Notepad or Word. So what do you think I did? That’s right, I wrote a PowerShell script to handle it for me. I just throw the list of people into a text file and call up this script.

$rawnames = get-content C:\path\to\names.txt
$csnames = ""
$rawnames | % { $csnames += "$($_.tostring().split(' ')[1]), $($_.tostring().split(' ')[0]); "}
$csnames | clip.exe

This isn’t the tidiest script, but I break it up into a couple extra parts so it’s easier to edit on the fly. I might comment out the “| clip.exe” part of the last line if I don’t want the output on my clipboard.

The first line just gets the content of the text file and the second line initializes the variable $csnames (which stands for [semi]colon separated names). On the third line, I go through every value in the text file and put the part after the first space (the last name), a comma and space, and then the part before the first space (the first name) into the $csnames string. I throw a semicolon on and move to the next one.

This won’t do well with names like “John van Doe” that have multiple spaces. It just happens to suit my needs and might serve as a super simple example to some of you who are trying to wrap your heads around manipulating strings in PowerShell.
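
If you did need to handle those, one hedge is to split on only the first space so everything after it becomes the last name. A rough variant of the third line using the -split operator’s limit argument:

$rawnames | % { $first,$rest = $_ -split ' ',2; $csnames += "${rest}, ${first}; " }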

Read More

Imported PowerShell Sessions ErrorActionPreference Gotcha

I just bumped into something silly that I know I’ll forget about in the future. Using the function in my PowerShell profile to open an Exchange Management shell, I ran the following command as part of a script.

try { Get-Recipient doesntexist } catch [Exception]{ write-host "No such mailbox" }

It’s a pretty self-explanatory command. I was trying to detect if a mailbox, in this case “doesntexist”, existed or not. Typically if the mailbox doesn’t exist, the Get-Recipient cmdlet will throw an error. My goal was to catch the error and do something productive with it but the above command doesn’t trigger the Catch block.

No problem, I thought to myself. My ErrorActionPreference is set to Continue by default so I’ll tweak it for this command.

try { Get-Recipient doesntexist -erroraction stop } catch [Exception]{ write-host "No such mailbox" }

The -ErrorAction Stop part should make the script stop executing on an error and hop into the Catch block. Wrong! The above command throws an error without triggering the Catch block, too.

It turns out I had to edit my $ErrorActionPreference variable to be Stop. Just using the flag in the command doesn’t work. I’ve run into this in other scripts where I import a PSSession, too. Now my command looks something like this.

Try { $OldErrorActionPref = $ErrorActionPreference; $ErrorActionPreference = "Stop"; Get-Recipient doesntexist } catch [Exception]{ write-host "No such mailbox" } $ErrorActionPreference = $OldErrorActionPref

First, I’m getting the current value of $ErrorActionPreference and storing it. Then I set the ErrorActionPreference to Stop. I run my Get-Recipient command which fails, and now instead of getting an error, my Catch block is triggered. Afterwards, I set $ErrorActionPreference back to its previous value.

Now, because I’ve written a blog post about this, I’ll never forget again.

Read More

Quick Tip - Use PowerShell To Detect If A Location Is A Directory Or A Symlink

In PowerShell, symbolic links (symlinks) appear pretty transparently when you’re simply navigating the file system. If you’re doing other work, though, like changing ACLs, bumping into symlinks can be a pain. Here’s how to tell if a directory in question is a symlink or not.

Consider the following commands.

PS C:\Users\ThmsRynr> ((get-item c:\symlink).Attributes.ToString())
Directory, ReparsePoint

PS C:\Users\ThmsRynr> ((get-item c:\normaldir).Attributes.ToString())
Directory

Here, we’re just running a Get-Item command on two locations, getting the Attributes property and converting to a string. The first item is a symlink and includes “ReparsePoint” in its attributes. The second item is a normal directory and does not include “ReparsePoint”.

So that means we can do something as easy as this.

PS C:\Users\ThmsRynr> ((get-item c:\symlink).Attributes.ToString() -match "ReparsePoint")
True

PS C:\Users\ThmsRynr> ((get-item c:\normaldir).Attributes.ToString() -match "ReparsePoint")
False

Easy. If the above values have “ReparsePoint” in them, we know they are a symlink and not just a regular directory. In my case, my script to apply ACLs to a group of directories avoided symlinks with ease.
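
If you do this kind of check often, you could wrap it in a tiny helper function. A minimal sketch (the function name is mine, not a built-in):

function Test-Symlink ($Path)
{
    # Returns $true if the item's attributes include ReparsePoint
    ((Get-Item $Path).Attributes.ToString() -match "ReparsePoint")
}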

Read More

Bypassing PowerShell Execution Policy

Let me be absolutely clear about this post. I do not in any way encourage or support people who wish to use the below information to circumvent the controls put in place by companies and administrators. This post is strictly for academic purposes and for the sake of sharing information.

PowerShell Execution Policies control whether or not a system may run a PowerShell script based on whether the script is signed or not. See the about_Execution_Policies Technet page for more information if you are unfamiliar with execution policies or how to apply them. Execution policies do not, however, limit a user or service from running commands in a PowerShell shell (PowerShell.exe).

So what if you have an unsigned script you want to run but your execution policy is preventing it? Well, there’s a way to bypass the execution policy. And it’s run from a PowerShell shell.

Administrative users can easily bypass the execution policy with this command.

PowerShell.exe -noprofile -executionpolicy bypass -file "\\path\to\file.ps1"

But what about limited users? Well there’s something for them, too.

Powershell.exe -NoProfile -Command {.([scriptblock]::create((Get-Content "\\path\to\script.ps1" | out-string)))}

That’s right, just one line. No registry hacking, no weird developer program strangeness, just a command that allows a user or service to subvert the execution policy of the machine.

Let’s break down the command. We’re launching PowerShell.exe, not exactly a puzzler. We want it with no profile and we’re telling it to run a command. The trick is that the command we’re running is effectively going to be the script that our execution policy would otherwise block.

The dot is the call operator; it basically means “execute”, and in this case, we’re telling it to execute what’s in the following round brackets. The round brackets contain instructions to create a new ScriptBlock out of the contents of the .ps1 file that the execution policy would otherwise prevent from running.

I think it’s clear that this is not really something that Microsoft intends for you to do. Use (or not) wisely at your own discretion.

Read More

Find All Certificates Issued Of A Specific Template

As part of another PowerShell script I’m writing, I needed to get an array of all of the certificates issued in my Enterprise PKI environment by a specific Issuing Certificate Authority (CA) that are of a certain Certificate Template. That doesn’t sound like such a tall order. You can launch MMC.exe, add the Certification Authority module, browse the issued certificates and see for yourself the different issued certs and their template.

PowerShell is a bit trickier, though, for a couple reasons. First, you’re going to need a PowerShell module to help you with this task. I really like PSPKI (available on CodePlex). Install that module and run the command to import the module.

import-module -Name pspki

The next tricky thing to keep in mind is that your “CertificateTemplate” attribute on each issued cert doesn’t always present itself like you think it should. That’s pretty ambiguous so I’ll explain more.

In the Certificate Authority MMC, most of the certificates you issue should have a value in the Certificate Template column along the lines of Template Name (OID for the template) where the part in brackets is the unique object identifier (OID) for the template. In the MMC, this information is presented pretty consistently. This isn’t really the case for PowerShell.

The following command will get you a list of all the Certificate Templates that have been used to issue certs on your CA as the Certificate Templates are presented in PowerShell.

Get-CertificationAuthority -computername ca-name.fqdn.tld | Get-IssuedRequest -property CertificateTemplate | select-object -property CertificateTemplate -unique

The first thing we need to do is get the CA since the Get-IssuedRequest cmdlet works with a CA. We get the issued requests (the certificates that have been issued from the CA) while making sure to include the CertificateTemplate property. Then we just select the unique Certificate Templates. Mine returns a mixed list of OIDs and more traditional names - not Name (OID) like we saw in the MMC. Keep this in mind as we continue.

Back on track, where's my list of certs with a specific template?

With the above information in mind, we’re better armed to get a list of all certs issued by our CA with a specific template. We really only have two steps: 1. Find out how the Certificate Template we’re concerned with is represented in PowerShell and 2. Actually get the list of certs with that template.

Task 1 isn’t so hard. First, go into the Certification Authority MMC and find a cert with the template you are concerned with. Look at the Issued Common Name column and take note of the value in that column. Then in PowerShell, run this command.

Get-CertificationAuthority -computername ca-name.fqdn.tld | Get-IssuedRequest -filter "CommonName -eq 'TheValueYouSawInMMC'" -property CertificateTemplate

Like above, we’re getting the CA we’re concerned with and getting the issued requests. This time, though, we’re not looking to return every cert issued, just the one(s) where the Common Name is the same as the value you saw in the MMC. We also need to make sure to include the CertificateTemplate property because it’s not returned by default. Use -property * to get every property back and take a detailed look at a certificate. There are some neat things you can do.

This will get you back a bit of interesting information about the certificate you identified in the MMC as being of the correct template. Specifically, you can see what the value is under the CertificateTemplate property. Maybe it’s a friendly name, maybe it’s an OID. Either way, that’s the value that PowerShell is using to identify that particular template.

Use the above value for the CertificateTemplate in this command.

Get-CertificationAuthority -computername ca-name.fqdn.tld | Get-IssuedRequest -property CertificateTemplate | ? { $_.CertificateTemplate -eq "The Certificate Template Value From Above Command" }

We’re getting the CA, getting all the issued certs (including those certs’ CertificateTemplate property) and filtering where the CertificateTemplate property is equal to the one we found in the last command we ran.

There you go! Export that to a CSV, assign it to a variable. The rest is up to you.
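
For instance, dumping the matches to a CSV might look like this sketch (the template value and output path are placeholders):

$certs = Get-CertificationAuthority -computername ca-name.fqdn.tld | Get-IssuedRequest -property CertificateTemplate | ? { $_.CertificateTemplate -eq "The Certificate Template Value From Above Command" }
$certs | Export-Csv -Path C:\temp\issued-certs.csv -NoTypeInformation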

Read More

Quick Tip - List All SMA Schedules That Repeat

I use a few PowerShell scripts that end up triggering Service Management Automation (SMA) runbooks. Each time you want to use PowerShell to do that, you end up creating a one-time use SMA schedule. These one-time schedules are eventually cleaned up by SMA but they can clutter your view pretty well if you have a lot of them.

Luckily, there’s an easy way to use PowerShell to list SMA schedules that aren’t one-time use. We just want a list of all the SMA schedules that are repeating. You need the SMA PowerShell tools for this.

Get-SmaSchedule -WebServiceEndpoint https://SMA-Management-Server | ? { $_.NextRun } | ft

We’re going to get all the SMA schedules on our SMA implementation and get the ones where there is a NextRun value. The question mark is an alias for the Where-Object command and so we’re looking for schedules where $_.NextRun is true (has a value, isn’t null). I like formatting the output as a table for easier reading.

If a schedule has a NextRun attribute, it’s safe to say that it’s going to run sometime in the future and is not a one-time use schedule that’s already done its job.

Read More

Quick Tip - Get-Random Is Weird - Doesn't Include The Maximum Value

The PowerShell command Get-Random is kind of weird. Consider the following script:

while ($true)
{
    Get-Random -Minimum 1 -Maximum 2
    sleep 1
}

Run it on your own computer. Every second, it should write a random number between 1 and 2 until you interrupt it (CTRL + C). You would expect a somewhat balanced output of 1’s and 2’s like if you were recording the outcomes of repeatedly flipping a coin. Right? Wrong. You will get a string of 1’s and never ever EVER get a 2. Change the Maximum to 3 and you will get 1’s and 2’s but no 3’s.

Apparently the maximum value of the Get-Random command isn’t a valid value to return, but the minimum is. In other words, -Minimum is inclusive while -Maximum is exclusive, so the maximum you specify will never actually be returned.

Weird.
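
The practical workaround is simply to ask for a maximum one higher than the largest value you actually want. For a fair 1-or-2 coin flip:

Get-Random -Minimum 1 -Maximum 3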

Read More

Tricky PowerShell Pipeline Tricks - Playing With WMI

Here’s a quick task: Get the WMI object win32_bios for a computer. Using PowerShell, that’s really easy. You just run Get-WMIObject win32_bios. Now what if you wanted all the extended properties of the object (not just the five that it normally returns) and ONLY to return the properties that actually have a value assigned?

Well this question just got trickier. win32_bios isn’t very big so it’s an easy one to play with in this example. Let’s step through a couple commands so we know what we’re dealing with.

Get-WMIObject win32_bios


SMBIOSBIOSVersion : 6.00
Manufacturer      : Phoenix Technologies LTD
Name              : PhoenixBIOS 4.0 Release 6.0     
SerialNumber      : VMware-42 00 a6 a6 02 95 ec 5e-89 05 0a cd b1 3b aa c6
Version           : INTEL  - 6040000

Well okay, there are the five properties that we knew the command returns by default. We know the extended properties are in there and we can prove it.

Get-WMIObject win32_bios | Select-Object -Property *


PSComputerName        : workingsysadmin
Status                : OK
Name                  : PhoenixBIOS 4.0 Release 6.0     
Caption               : PhoenixBIOS 4.0 Release 6.0     
SMBIOSPresent         : True
__GENUS               : 2
__CLASS               : Win32_BIOS
__SUPERCLASS          : CIM_BIOSElement
__DYNASTY             : CIM_ManagedSystemElement
__RELPATH             : Win32_BIOS.Name="PhoenixBIOS 4.0 Release 6.0     ",SoftwareElementID="PhoenixBIOS 4.0 Release 6.0     ",SoftwareElementState=3,TargetOperatingSystem=0,Version="INTEL  - 6040000"
__PROPERTY_COUNT      : 27
__DERIVATION          : {CIM_BIOSElement, CIM_SoftwareElement, CIM_LogicalElement, CIM_ManagedSystemElement}
__SERVER              : workingsysadmin
__NAMESPACE           : root\cimv2
__PATH                : \\workingsysadmin\root\cimv2:Win32_BIOS.Name="PhoenixBIOS 4.0 Release 6.0     ",SoftwareElementID="PhoenixBIOS 4.0 Release 6.0     ",SoftwareElementState=3,TargetOperatingSystem=0,Version="INTEL  - 6040000"
BiosCharacteristics   : {4, 7, 8, 9...}
BIOSVersion           : {INTEL  - 6040000, PhoenixBIOS 4.0 Release 6.0     }
BuildNumber           : 
CodeSet               : 
CurrentLanguage       : 
Description           : PhoenixBIOS 4.0 Release 6.0     
IdentificationCode    : 
InstallableLanguages  : 
InstallDate           : 
LanguageEdition       : 
ListOfLanguages       : 
Manufacturer          : Phoenix Technologies LTD
OtherTargetOS         : 
PrimaryBIOS           : True
ReleaseDate           : 20110921000000.000000+000
SerialNumber          : VMware-42 00 a6 a6 02 95 ec 5e-89 05 0a cd b1 3b aa c6
SMBIOSBIOSVersion     : 6.00
SMBIOSMajorVersion    : 2
SMBIOSMinorVersion    : 4
SoftwareElementID     : PhoenixBIOS 4.0 Release 6.0     
SoftwareElementState  : 3
TargetOperatingSystem : 0
Version               : INTEL  - 6040000
Scope                 : System.Management.ManagementScope
Path                  : \\workingsysadmin\root\cimv2:Win32_BIOS.Name="PhoenixBIOS 4.0 Release 6.0     ",SoftwareElementID="PhoenixBIOS 4.0 Release 6.0     ",SoftwareElementState=3,TargetOperatingSystem=0,Version="INTEL  - 6040000"
Options               : System.Management.ObjectGetOptions
ClassPath             : \\workingsysadmin\root\cimv2:Win32_BIOS
Properties            : {BiosCharacteristics, BIOSVersion, BuildNumber, Caption...}
SystemProperties      : {__GENUS, __CLASS, __SUPERCLASS, __DYNASTY...}
Qualifiers            : {dynamic, Locale, provider, UUID}
Site                  : 
Container             :

There’s everything! That’s a lot of blanks, though. Depending on the type of reporting you’re doing, that might not be so nice to look at. I know that I’d like to get them out. Luckily with PowerShell 4.0, it’s really easy if you use the -PipelineVariable parameter.

Get-WmiObject win32_bios -PipelineVariable bios | 
  foreach {
   $props = $_.psobject.properties.name | Where-Object {$bios.$_}
   $bios | select $props
  }

I’ve left out the output since it’s every property that has a value and none of the ones that have a blank. Let’s take a look at what’s actually happening here.

On Line 1, we’re running the same command we ran the last two times except we assigned a Pipeline Variable which will be named $bios (you omit the $ when assigning the name of the variable). We enter a foreach loop on Line 2.

On Line 3, we’re setting the value of $props and on Line 4, we’re writing out the part of the WMI object that contains it. The tricky thing here is how we get the value of $props. Look at how we use the Pipeline Variable and the .PSObject.Properties.Name property to identify the items with a value.

Well that's great but what if I don't have PowerShell 4.0 or found that example really confusing?

Don’t worry, this is pretty easy to do in earlier versions of PowerShell, too.

gwmi win32_bios | %{$wmi = $_}
Select-Object -inputobject $wmi -property ($wmi.Properties.Name | Where-Object -FilterScript {$wmi.item($_)})

I started using some more shortcuts (gwmi and % instead of Get-WMIObject and ForEach-Object).

In Line 1, I’m doing the same ol’ Get-WMIObject command that I did before and, down the pipe, I’m assigning the value to $wmi. I could have also done “$wmi = gwmi win32_bios” but this strategy has benefits if you’re planning on scaling this script out.

In Line 2, I’m doing some tricky Select-Object work. The input for the command is $wmi, which isn’t so tricky, but the property that we’re selecting is pretty tricky. -Property takes an array, so we can put a command in there as long as it returns an array.

$wmi has a property of .Properties.Name which is an array of all the names of the properties attached to the WMI object that got returned in Line 1 (that we are using as our input). We don’t just want all the properties, though, so we need to select only the ones with a value (not null). We do that by piping the list of all the properties into a Where-Object command.

The Where-Object command has a tricky parameter called -FilterScript which basically acts as an implicit IF statement. If you wrote Where-Object -FilterScript {$true} then you would return every object in the pipe. If you wrote Where-Object -FilterScript {((Get-Random -Minimum 1 -Maximum 100) % 2) -eq 1} you would get a random subset of properties because sometimes the statement will be true and sometimes the statement will be false. These are all silly items to filter on, though.

In my script, I’m filtering on if the current property the script is looking at has a value in $wmi. If the property is 0 or null and therefore doesn’t exist, $wmi.item($_) will return false and that line won’t be returned. It’s basically a test to see if there’s a string or not. Consider this example:

$var1 = [string]$null
$var2 = "something"

if ($var1) { write-host "Var1 has a value" } else { write-host "Var1 has no value" }
if ($var2) { write-host "Var2 has a value" } else { write-host "Var2 has no value" }

#Will return
#Var1 has no value
#Var2 has a value

Because $var1 doesn’t have an actual value assigned to it, an if ($var1) will return false. That’s the same logic we are using all throughout the above code.

Tricky, right?

Read More

Quick Tip - When was an Exchange Online Protection Transport Rule Changed?

What if you have an Exchange Online Protection (EOP) transport rule that isn’t behaving the way you thought it should? I’ve been the victim of some strange inconsistencies with EOP since they tried to migrate us from Forefront Online Protection for Exchange (FOPE) in March (actually summer) of last year.

So did a transport rule get changed administratively by some cowboy admin colleague? Or is EOP conspiring against you? In EOP’s GUI, you can’t tell when a transport rule was changed last but you can if you make a remote connection to EOP using PowerShell.

You just need to know the name which you can find by running a Get-TransportRule command and looking for the one that you’re interested in. Then run this…

(Get-TransportRule | ? { $_.Name -eq 'The Name Of Your Rule' }).WhenChanged

… which will give you the date and time that the rule was last changed.
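
As a side note, since Get-TransportRule accepts a rule name directly, this shorter form should work too, assuming the rule name is unique:

(Get-TransportRule 'The Name Of Your Rule').WhenChanged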

Read More

Quick Tip - Opening An Exchange Online Protection Shell

There’s lots of big, exciting, non-blogable things happening at work this week so here’s a very quick tip.

Last week I wrote a post on a PowerShell function I threw in my profile to connect quickly to Exchange. That’s great, but what if you also want to manage Exchange Online Protection (EOP) from a PowerShell console? Well, it turns out to be pretty easy.

$cred = Get-Credential
$s = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $cred -Authentication Basic -AllowRedirection
import-pssession $s

This looks a lot like the function I showed you last week except it’s connecting to Office 365 and you need to use your Live ID instead of AD credentials.

Read More

Opening A Remote Exchange Management Shell

Here’s a function I stuck in my PowerShell profile. I found myself making lots of remote connections to my Exchange 2013 environment so I put together a quick function to create the connection for me. It’s far from perfect but it saves me time every single time I use it so check it out.

function gimme-exchange ()
{
    $arrExchangeServersURI = @("http://fqdn-of-server-one/Powershell","http://fqdn-of-server-two/Powershell")
    $success = $false
    $UserCredential = Get-Credential
    ForEach ($connectionURI in $arrExchangeServersURI)
    {
        Try
        {
            If($success -ne $true)
            {
                $getMBXconn = new-pssession -configurationname Microsoft.Exchange -connectionuri $connectionURI -Authentication Kerberos -Credential $UserCredential
                $null = Import-PSSession -AllowClobber $getMBXConn
                $success = $true
            }
        }
        catch [Exception]
        {
            $strError = $_.Exception.Message
            $strError = "Error: ${strError}"
            $success = $false
        }
    }
}

On Line 1, we’re declaring the function - no big deal. I’m naming mine “gimme-exchange” so once my profile loads, I can just type that to start the function.

On Lines 3 to 5 I’m setting a few variables. Line 3 is weird. I made an array of the different Exchange servers I had rather than going through some autodiscovery process. The script will try to open a connection to the first one; if that fails, it tries the second one, and so on. It’s inefficient but I don’t add and remove a lot of Exchange servers so I can get away with it since this function is just for me. Line 4 is going to be used to detect if we made a connection or not. Line 5 prompts for and stores the administrative credentials that you will use to create the connection.

On Line 6, we start looping through all the servers I specified in Line 3. In a Try/Catch block, if we haven’t already made a successful connection to a previous Exchange server, we’re going to make a new connection and import it. You have to make sure you use the -configurationname item because we’re not just creating any old PSSession; Exchange is funny and we connect to it using some special parameters shown in Line 12. When we import the session on Line 13, we’re going to allow clobbering of other existing cmdlets and suppress the output of the import command.

If we run into an error anywhere in the Try block, the Catch block is set up to capture the error message and continue looping through servers depending on how severe the error is.

That’s it! Yay, saving time.


Update: If you’re already logged in as the user you want to connect to Exchange as, you can skip the credential gathering part and run this instead.

$arrExchangeServersURI = @("http://fqdn-of-server-one/Powershell","http://fqdn-of-server-two/Powershell")
$success = $false
ForEach ($connectionURI in $arrExchangeServersURI)
{
    Try
    {
        If($success -ne $true)
        {
            $getMBXconn = new-pssession -configurationname Microsoft.Exchange -connectionuri $connectionURI -Authentication Kerberos
            $null = Import-PSSession -AllowClobber $getMBXConn
            $success = $true
        }
    }
    catch [Exception]
    {
        $strError = $_.Exception.Message
        $strError = "Error: ${strError}"
        $success = $false
    }
}


Read More

Renewing Exchange 2013 Certificates SHA-256 Style

I recently ran into an issue that I think is actually pretty funny. It was time to renew the publicly trusted certificate that we install on our Exchange 2013 servers that gets tied to SMTP, OWA and some other IIS services like autodiscover. Since SHA-1 is on the road to deprecation, our cert vendor pushed pretty hard to get something with a hashing algorithm of SHA-2 (often referred to as SHA-256, after its most common variant). Sounds reasonable, right?

Well, here’s the problem. Even though Microsoft is one of many vendors who is pushing the deprecation of SHA-1, Exchange 2013 doesn’t seem to have a mechanism built into it that generates a SHA-2 cert request. Even the New-ExchangeCertificate PowerShell command doesn’t have a way to change which hashing algorithm is used. Windows 2008 R2 and later support SHA-2 but at the time of writing this article, Exchange 2013 doesn’t have a way to generate such a request.

There are other ways to generate cert requests, though. Since Windows Server 2012 R2 supports SHA-2, perhaps - I thought - the Certificates MMC can be used to generate such a cert request. I was right, it can be used to generate a SHA-2 request. “Great!” I can hear you exclaim, “Now give me the steps on how to do it!” Not so fast. I’m not going to put the full steps to generating a SHA-2 certificate using the Windows Certificates MMC here because of a problem.

To generate a SHA-2 request using the Certificates MMC, you add the Certificates for Local Computer snapin to MMC.exe, right click on the Personal certificate store and generate a new request. When asked to choose a request template, you are offered two choices: Legacy or CNG. Legacy doesn’t support changing your hashing algorithm and therefore only generates SHA-1 requests. CNG it is, then! I continued on, generated my SHA-2 cert request, got it approved and took the certificate from my provider and went to test it. Almost everything worked except I couldn’t log into OWA or ECP. Why not? Because Exchange 2013 stores lots of info in encrypted cookies about you when you log into these services and it can’t use a CNG certificate to decrypt the data. Whenever I logged in, I would be immediately redirected back to the login page as if nothing had happened because the encrypted cookie with all my info (like “you logged in as username”) couldn’t be decrypted since Exchange 2013 can’t use the CNG provider in Windows 2012 R2.

How else can you generate a SHA-2 certificate request, then? The Windows MMC requires you to associate the request with a CNG provider that Exchange 2013 can’t use, and Exchange 2013 itself doesn’t allow you to create a SHA-2 request. This isn’t looking good; SHA-1 is being deprecated and I have to renew my certificate!

The answer turns out to be unbelievably easy. I couldn’t believe this worked.

Here’s the answer. Here’s how to renew your Exchange 2013 public certificate with a SHA-2 hashing algorithm…

  1. Use the New-ExchangeCertificate PowerShell command to generate a cert request that is perfect for your needs, minus the fact that it will request a SHA-1 hashing algorithm.
  2. Submit the request to your public certificate provider but indicate that it is a SHA-2 certificate request. Your provider should have an option to indicate what sort of certificate your request is set up for. Make sure you say SHA-2.
  3. Your provider will give you back a SHA-2 certificate that is not associated with a CNG provider that will work with your Exchange 2013 environment for all SMTP, IIS, IMAP and POP services you wish to bind it to.

That’s right, the answer is to generate a SHA-1 request anyway and tell your provider that it’s SHA-2.
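
For reference, step 1 might look roughly like this; the subject and domain names are placeholders and the exact parameters will depend on your environment:

$txtRequest = New-ExchangeCertificate -GenerateRequest -SubjectName "cn=mail.mydomain.tld" -DomainName "mail.mydomain.tld","autodiscover.mydomain.tld" -PrivateKeyExportable $true
$txtRequest | Out-File C:\temp\certreq.txt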

Credit to Microsoft Premier Support for figuring this out.

Read More

SMA Runbook Daily Report On SMA Runbook Failures

The sad reality of using Service Management Automation is that it can be a little iffy in the stability department. That being so, I decided to put together an SMA runbook that would report on all the other SMA runbook failures of the last 24 hours. Yes, I realize the irony in using SMA to report on its own runbook failures. One must have faith in one’s infrastructure and this particular runbook.

First things first, I need to declare my runbook/workflow and get the stored variable asset which holds my SMTP server.

workflow GetDailyFailedJobs
{
    $smtpserver = Get-AutomationVariable -Name 'SMTPServer'
}

Easy! Now the SMA PowerShell cmdlets work best in actual PowerShell, not in workflows so I’m going to cheat and use an inlinescript block to hold pretty much everything else. Now before we get to the good stuff, I’m going to knock out the easy task of setting up my try and catch blocks as well as my function that sends email.

workflow GetDailyFailedJobs
{
    $smtpserver = Get-AutomationVariable -Name 'SMTPServer'
    InlineScript
    {
        Try
        {
            $smtpServer = $using:smtpServer
            $smtpSubject = "SMA Job Failure List"
            $body = "Better put something in here!"
            $emailaddress = @("thmsrynr@outlook.com","someone@else.com")
            send-mailmessage -to $emailaddress -From "SMA_MGMT@else.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body -bodyashtml
        }
        Catch [Exception]
        {
            $smtpServer = Get-AutomationVariable -Name 'ABC-SMTPServer'
            $smtpSubject = "FAILURE: GetFailedJobs failed"
            $body = "GetFailedJobs failed"
            $emailaddress = @("thmsrynr@outlook.com","someone@else.com")
            send-mailmessage -to $emailaddress -From "SMA_MGMT@else.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body -bodyashtml
        }
    }
}

Most of this is pretty straight forward. I’m going to put some stuff in a try block and email it out if it works. If I catch an error, I’m going to email a notification that something screwed up in the try block.

Now the meat and potatoes. Something actually worth making a blog entry for! We need to build the content of the email we’re sending in the try block. Right now it’s just text that says “better put something in here” and it’s right. We better.

workflow GetDailyFailedJobs
{
    $smtpserver = Get-AutomationVariable -Name 'SMTPServer'
    InlineScript
    {
        Try
        {
            $arrFailedJobs = Get-SMAJob -Webserviceendpoint "https://sma-mgmt-server.else.com" | Where-Object -Property JobException | Where-object -Property endtime -gt ((get-Date).AddMinutes(-1440)).ToUniversalTime()
            If($arrFailedJobs)
            {
                $strHtmlMail = '<html><body><table style="padding: 6px; border: 1px solid #000000; border-collapse: collapse;"><tr style="background-color: #6388FF;  color: #FFFFFF;"><td>RunbookName</td><td>Start Time</td><td>End Time</td><td>Error</td></tr>'
                ForEach($objFailedJob in $arrFailedJobs)
                {
                    $strFailedJobStart = $objFailedJob.StartTime
                    $strFailedJobEnd =$objFailedJob.EndTime
                    $objFailedRunbook = Get-SmaRunbook -WebServiceEndPoint "https://sma-mgmt-server.else.com" -ID $objFailedJob.RunbookId
                    $strFailedRunbook = $objFailedRunbook.RunbookName
                    $strFailedJobError = $objFailedJob.JobException
                    $strHtmlMail += '<tr style="background-color: #B27272;  color: #FFFFFF;"><td style="border:1px solid #000000">' + ${strFailedRunbook} + '</td><td style="border:1px solid #000000">' + ${strFailedJobStart} + '</td><td style="border:1px solid #000000">' + ${strFailedJobEnd} + '</td><td style="border:1px solid #000000">' + ${strFailedJobError} + '</td></tr>'
                }
                $strHtmlMail += "&lt;/table&gt;&lt;/body&gt;&lt;/html&gt;"
                
                $smtpServer = $using:smtpServer
                $smtpSubject = "SMA Job Failure List"
                $body = $strHtmlMail
                $emailaddress = @("thmsrynr@outlook.com","someone@else.com")
                send-mailmessage -to $emailaddress -From "SMA_MGMT@else.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body -bodyashtml
            }
        }
        Catch [Exception]
        {
            $smtpServer = Get-AutomationVariable -Name 'ABC-SMTPServer'
            $smtpSubject = "FAILURE: GetFailedJobs failed"
            $body = "GetFailedJobs failed"
            $emailaddress = @("thmsrynr@outlook.com","someone@else.com")
            send-mailmessage -to $emailaddress -From "SMA_MGMT@else.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body -bodyashtml
        }
    }
}

Wow that got a little ugly really quickly. What you need to keep in mind is that a lot of this ugliness is styling to make the email report pretty. That’s a little counter-intuitive but, hey, welcome to scripting as a working sysadmin. Let’s break it down line by line.

Line 8 is getting an array of failed jobs in the last day. It’s a big pipeline which:

  • Gets the jobs
  • Where there's a JobException property on the job
  • Where the end time falls within the last day (end time greater than 24 hours ago)

In Lines 9 to 20, if there are jobs in the array of failed jobs within the last day, we have to build a report. Line 11 initializes the HTML that will become the body of our report by putting in a table, its column headers and styling it.

Starting on Line 12, for each failed job in the array of failed jobs, we’re using the runbook ID to look up the runbook name, then adding a row to our HTML table with the runbook name, start time, end time and the exception that was thrown.

On lines 19 to 21 we finish off our HTML for the body of our email. Then we use the code we already wrote to send it to us.

Boom. Pretty nice report on failed jobs. Hopefully you never see one in your inbox, otherwise you’re going to have some troubleshooting to do.

Read More

Quick Tip - Run An SMA Runbook At A Specific Date/Time

Happy New Year’s Eve! Here’s a quick tip just before New Year’s.

I recently answered a question on Technet about scheduling SMA runbooks. It’s no secret that the scheduling engine in Service Management Automation leaves something to be desired. Here’s how I like to use PowerShell to get specific about when an SMA runbook is going to be triggered.

You’ll need the SMA PowerShell tools installed and imported for this to work.

$dateWhen = [DateTime]"<put a date and time in here or otherwise calculate one>"
$strSchedName = "some_prefix_$($dateWhen)"
$schedRun = set-smaschedule -name $strSchedName -webserviceendpoint "https://your-endpoint" -scheduletype onetimeschedule -starttime $dateWhen -expirytime $dateWhen.AddHours(3) -description $env:username
$strReturn = start-smarunbook -name "your-runbook" -WebServiceEndpoint "https://your-endpoint" -schedulename $strSchedName -parameters @{ var1 = "var1"; var2 = "var2" }

Line 1 is easy, it’s just a variable for a datetime object and it’s going to represent the time you want to trigger the runbook. Line 2 is a variable for what the name of the SMA schedule asset will be. I like to add something dynamic here to avoid naming collisions.

Now the interesting parts. On Line 3, we’re creating an SMA schedule asset using set-smaschedule. It’s going to be named our Line 2 variable, it’s going to be a onetimeschedule (instead of recurring), start at our start time (Line 1) and expire three hours after the start time. On Line 4, I’m triggering the runbook with start-smarunbook and specifying the schedule we created on Line 3. I’m also passing parameters in a hash table.

You’re done! The only hiccup with this I’ve seen is if one of your parameters for your runbook is a hashtable. Matthew at sysjam.wordpress.com covered this weird situation in a blog post very recently.

Read More

Quick Tip - Get All SMA Runbook Schedules That Will Run Between Now And Then

I wanted to do some maintenance on my SMA runbook servers but couldn’t remember which jobs were going to run in the next 12 hours (if any). Luckily there’s a quick way of getting that information! This work assumes that you have the SMA tools installed and that you ran the below command or have it as part of your profile.

import-module Microsoft.SystemCenter.ServiceManagementAutomation

Behold!

get-smaschedule -WebServiceEndpoint "https://your-server" | ? { $_.NextRun -gt (get-date).date -and $_.NextRun -lt (get-date).addhours(12)}

This isn’t a very crazy command. “your-server” is the server where you have the SMA management items installed, not an individual runbook server.

You’re getting all the SMA schedules from your SMA instance and filtering for items whose next run is after “now” and before “now plus 12 hours”. You can change the get-date related items easily to suit your needs. For instance, what ran last night? What will run tomorrow? What ran on October 31?

Read More

SMA Runbooks And UTC Time

I don’t know about you but I hate dealing with systems that use UTC time. I have SMA runbooks that work with Exchange 2013, Exchange Online Protection and other services that annoyingly return results in UTC instead of my local timezone. I wrote an SMA runbook that can be called from other SMA runbooks to do the conversion for me.

workflow ConvertUTCtoLocal
{
    param(
       [parameter(Mandatory=$true)]
       [String] $UTCTime
    )
    $strCurrentTimeZone = (Get-WmiObject win32_timezone).StandardName
    $TZ = [System.TimeZoneInfo]::FindSystemTimeZoneById($strCurrentTimeZone)
    $LocalTime = [System.TimeZoneInfo]::ConvertTimeFromUtc($UTCTime, $TZ)
    Return $LocalTime
}

It’s pretty simple runbook! It has one mandatory parameter $UTCTime which, as the name would suggest, is the UTC time that you want to convert to your local time.

Line 7 gets the local timezone by performing a WMI query. Line 8 uses [System.TimeZoneInfo]::FindSystemTimeZoneById to convert the value returned from the WMI query into a timezone object. Line 9 performs the actual conversion from whatever the UTC time is to the timezone determined in line 8.

This whole thing assumes that the time and timezone are set correctly on your SMA runbook servers.
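
Calling it from another runbook is then a one-liner. Here’s a minimal usage sketch (the timestamp is made up):

# Sketch: convert a UTC timestamp returned by another service into local time
$localTime = ConvertUTCtoLocal -UTCTime "2015-01-15 14:00:00"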

Read More

Print Everything In A Folder To A Specific Printer

For one reason or another, I found myself in a situation this week where I needed to print all the contents of a directory on an hourly basis. Not only did I need to print the contents, I needed the jobs to go to a specific printer, too.

SMA runbooks to the rescue! I wrote my solution in PowerShell and stuck it in an inlinescript block in my runbook that I invoked on a print server.

First, I needed to get everything in the directory and print it. I originally looked at using Out-Printer but I have images, PDFs, all kinds of non-plaintext files. I needed another solution and it was this:

get-childitem "\\nas\directory" | % { Start-Process -FilePath $_.VersionInfo.FileName –Verb Print -PassThru }

For each file in this directory, we’re starting a process on the file that prints it. It will effectively open the file in whatever the default application is, render it and print it to your default printer. Great! Except what if I don’t want to print to the default printer? The Start-Process cmdlet doesn’t seem to lend itself to that very well. As usual, I had to cheat.

$defprinter = (Get-WmiObject -ComputerName . -Class Win32_Printer -Filter "Default=True").Name
$null = (Get-WmiObject -ComputerName . -Class Win32_Printer -Filter "Name='My Desired Printer'").SetDefaultPrinter()
get-childitem "\\nas\directory" | % { Start-Process -FilePath $_.VersionInfo.FileName –Verb Print -PassThru }
$null = (Get-WmiObject -ComputerName . -Class Win32_Printer -Filter "Name='$defprinter'").SetDefaultPrinter()

Since we’re printing to the default printer, why don’t we just change the default? Well, because maybe the default printer (that we don’t want to print to) is default for a reason. So let’s change the default printer and change it back after.

Line 1 gets the name of the default printer. Line 2 sets the default printer to My Desired Printer, which is presumably the name of a valid printer on the server. Line 4 sets the default back to whatever the original default was, and we already know what line 3 does. Obviously, this solution works in my specific environment, which can tolerate a brief interruption to which printer is the default.

The rest was easy. I set up a new SMA runbook, invoked the above script on my print server (in an inlinescript block) and scheduled it to run hourly.
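
If it helps, here’s a rough sketch of what that runbook could look like. The workflow name and print server name are placeholders, and the inlinescript body is just the code from above:

workflow PrintFolderContents
{
    # Run the printing code locally on the print server (hypothetical name)
    inlinescript
    {
        $defprinter = (Get-WmiObject -ComputerName . -Class Win32_Printer -Filter "Default=True").Name
        $null = (Get-WmiObject -ComputerName . -Class Win32_Printer -Filter "Name='My Desired Printer'").SetDefaultPrinter()
        get-childitem "\\nas\directory" | % { Start-Process -FilePath $_.VersionInfo.FileName -Verb Print -PassThru }
        $null = (Get-WmiObject -ComputerName . -Class Win32_Printer -Filter "Name='$defprinter'").SetDefaultPrinter()
    } -PSComputerName "printserver1.domain.tld"
}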

Read More

Open File Dialog Box In PowerShell

Here’s a neat little PowerShell function you can throw into your scripts. Lots of times I want to specify a CSV or TXT or some other file in a script. It’s easy to do this:

$inputfile = read-host "Enter the path of the file"
$inputdata = get-content $inputfile

But that means you have to type the whole absolute or relative path to the file. What a pain. I know what you’re thinking… There must be a better way!

There is! Use an open file dialog box. You know, like when you click File, Open and a window opens and you navigate your filesystem and select a file using a GUI. How do you do it in PowerShell? Let me show you. First things first: let’s declare a function with a couple of the items we’re going to need.

Function Get-FileName($initialDirectory)
{
    [System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms") | Out-Null
}

I’m going to name this function Get-FileName because I like the Verb-Noun naming scheme that PowerShell follows. It’s got a parameter, too. $initialDirectory is the directory that our dialog box is going to display when we first launch it. The part of this that most likely looks new is line 3. We need to load a .NET item so we can use the Windows Forms controls. We’re loading via partial name because we want all the Windows Form controls, not just some. It’s faster and easier to do this than it is to pick and choose. We’re piping the output to Out-Null because we don’t want all the verbose feedback it gives when it works.

Now let’s open the thing and get to business selecting a file.

Function Get-FileName($initialDirectory)
{
    [System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms") | Out-Null
    
    $OpenFileDialog = New-Object System.Windows.Forms.OpenFileDialog
    $OpenFileDialog.initialDirectory = $initialDirectory
    $OpenFileDialog.filter = "CSV (*.csv)| *.csv"
    $OpenFileDialog.ShowDialog() | Out-Null
}

On line 5, we’re creating a new object. That object is unsurprisingly an OpenFileDialog object. On line 6 we’re specifying that initial directory that we got in the parameter. On line 7 we’re doing something a little interesting. The filter attribute of the OpenFileDialog object controls which files we see as we’re browsing. That’s this part of the box.

I’m limiting my files to CSV only. The first part of the value is CSV (.csv) which is what the dialog box shows in the menu. The second part after the pipe character *.csv is the actual filter. You could make any kind of filter you want. For instance, if you wanted to only see files that started with “SecretTomFile”, you could have a filter like SecretTomFile.

The next item, on line 8, is opening the dialog box; we do that with the ShowDialog() method. We discard the output from this command because it’s spammy in this context, just like when we added the .NET items.

One last thing! We’ve created, defined and opened our OpenFileDialog box but don’t we actually need to get the result of what file was selected? Yes, we do. That’s pretty easy, though.

Function Get-FileName($initialDirectory)
{
    [System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms") | Out-Null
    
    $OpenFileDialog = New-Object System.Windows.Forms.OpenFileDialog
    $OpenFileDialog.initialDirectory = $initialDirectory
    $OpenFileDialog.filter = "CSV (*.csv)| *.csv"
    $OpenFileDialog.ShowDialog() | Out-Null
    $OpenFileDialog.filename
}

The FileName attribute is set when someone commits to opening a file in the OpenFileDialog box. On line 9, we’re returning it to whatever called our function.

So to use this function in the same way as the example at the top of this post, your code would look like this.

$inputfile = Get-FileName "C:\temp"
$inputdata = get-content $inputfile

I think this is a lot nicer than typing a filename every time you want to run a script. I find it particularly convenient on scripts I run a lot.

Read More

Cheating To Fix Access Is Denied Error Using Get-WMIObject

I was doing a little work that involved using PowerShell to get a list of printers from several remote print servers. I figured this would be a great job for WMI, and I was right. The command I used looked like this.

$printserver = "printserver1.domain.tld"
Get-WMIObject -class Win32_Printer -computer $printserver | Select Name,DriverName,PortName | Export-CSV -path "C:\temp\$printserver.csv"

I had a list of print servers that I imported into an array and looped through them but this is the important part of the code. I am simply using WMI to get some information about the logical printer objects on a given print server and exporting them to a CSV.
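
The loop itself isn’t shown, but it was nothing fancier than something like this sketch (the server names are hypothetical):

$printservers = @("printserver1.domain.tld","printserver2.domain.tld")
foreach ($printserver in $printservers)
{
    Get-WMIObject -class Win32_Printer -computer $printserver | Select Name,DriverName,PortName | Export-CSV -path "C:\temp\$printserver.csv"
}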

How boring! This isn’t a very old blog but we usually talk about more complicated things than that. Well things got weird on one print server that we’ll simply call PrintServer2. PrintServer2 threw an error instead of working nicely.

Get-WMIObject : Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
At line:1 char:1
+ Get-WMIObject -class Win32_Printer -computer PrintServer2 | Select Name,DriverName,Por ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-WmiObject], UnauthorizedAccessException
    + FullyQualifiedErrorId : System.UnauthorizedAccessException,Microsoft.PowerShell.Commands.GetWmiObjectCommand

Not cool. I have Domain Admin rights… it’s a domain joined server… what do you mean access is denied? Running the command locally on the server worked, I just couldn’t do it remotely. There’s plenty of literature on trying to fix this error already but I was in a hurry so I tried the next thing that came to mind: cheat a bit and run the command locally on the server… remotely.

$printserver = "printserver2.domain.tld"
invoke-command -computer $printserver -scriptblock { Get-WMIObject -class Win32_Printer -computer localhost | Select Name,DriverName,PortName } | Export-CSV -path "C:\temp\$printserver.csv"

I didn’t do anything ground breaking, I just used invoke-command to run the command on the server instead of running the command on my local machine (to retrieve remote information).

Hah! I beat you, stupid Windows Server 2003 box that has been around since I was in junior high school and needs to be decommissioned! I got your printer information from you without having to fix any of your weird problems!

The moral of the story is that sometimes, you can cheat a little bit to accomplish your goal and avoid doing a whole bunch of terrible patches, regedits, etc. to your infrastructure.

Read More

Report On Expiring Certs From A Powered Down Certificate Authority

Let’s hypothetically say I have an old Windows Server 2003 Intermediate Certificate Authority. Let’s also hypothetically say that I already replaced my antiquated Windows Server 2003 PKI infrastructure with a Windows Server 2012 PKI infrastructure and I am only keeping the 2003 stuff around so it can publish a CRL and to run a monthly script that tells me which certs are going to expire within 60 days. It’s good to know which certs will expire within 60 days so you can remember to renew them or confirm that they don’t need renewal.

Perhaps I decide to shut down the 2003 CA so it quits taking up resources and power but I keep it around in case I need to power it back on and revoke a certificate. How do I keep getting those monthly reports about which certs will expire soon? Note: I’m not addressing the CRL publishing concerns or certificate revocation procedure in this post. We’re only talking about the expiring soon notification issue.

Service Management Automation to the rescue! We’re going to set up an SMA runbook that is going to send us monthly emails about which certs from this 2003 CA are going to expire within 60 days. How the heck are we going to do that when the server is powered off? Well, we’re going to cheat.

If we’re going to power this CA down, we’re going to need to get the information on its issued certificates from somewhere else. A CSV file would be a nice, convenient way to do this. As it would turn out, generating such a CSV file is really easy.

Before you power off the old CA, log into it and open the Certificate Authority MMC. Expand the Certification Authority (server name) tree, and the tree for the name of the CA. You should see Revoked Certificates, Issued Certificates, Pending Requests, Failed Requests and maybe Certificate Templates if you’ve got an Enterprise PKI solution. Right click on Issued Certificates and click Export List. Switch the Save As Type to CSV and put the file somewhere that you can see it from within Service Management Automation, like a network share. Note: once I got my CSV out of the CA MMC, I manually removed the white space from the column headings to make them nicer to read.

(Image: Export your list of Issued Certificates to a CSV file.)

Now the fun part. Time to put together an SMA runbook that will go through this CSV and email me a list of all the certs that are going to expire within 60 days of the current date. Sounds scary right? Well it turns out that it isn’t so bad.

Let’s start simply by initializing our PowerShell Workflow. You get to this point in SMA by creating a new runbook. I’m also going to set up a whopping one variable.

workflow GetExpiringCerts
{
    $strInPath = Get-AutomationVariable -Name 'INFOLDER'
}

Our workflow/runbook is called GetExpiringCerts. The variable $strInPath is the location to where I have all the files I ever load into SMA. That is, it’s just a path to a network share that’s the same in all my runbooks that use it.

So far so good? Good. Next we need to look through the CSV that’s somewhere beneath $strInPath for all our certs. Let’s break down that big block of code:

workflow GetExpiringCerts
{
    $strInPath = Get-AutomationVariable -Name 'INFOLDER'
    $strCerts = inlinescript { 
        $csvCerts = import-csv "$using:strInPath\GETEXPIRINGCERTS\IssuingCA.csv"
        $arrCertsExpireAfterToday = $csvCerts | ? { (get-date -date $_.CertificateExpirationDate) -ge (get-date) } | ? { (get-date -date $_.CertificateExpirationDate) -le (get-date).addmonths(2) }
        $strResults = "Certificates expiring on NAME OF YOUR CA"
        $arrCertsExpireAfterToday | % { $strResults += "<br><br>"; $strResults += "<b>Issued Common Name:</b> " + $_.IssuedCommonName; $strResults += "<br><b>Expires:</b> " + $_.CertificateExpirationDate; $strResults += " <b>Requested By:</b> " + $_.RequesterName }
        $strResults
    }
}

There’s some cheating right there. PowerShell Workflows are different than regular vanilla PowerShell in ways that I don’t always like. To make a PowerShell Workflow (which is what SMA runbooks use) execute some code like regular PowerShell, you need to wrap it in an inlinescript block. We’re going to take the output of the inline script and assign it to $strCerts.

Let’s break down what’s inside that inlinescript block. First we’re going to import the CSV full of our Issued Certificates from the CA. To use a variable defined outside the inlinescript, you prefix it with “using:”, hence $using:strInPath. $strInPath is defined outside the inlinescript block but I want to use it inside the inlinescript block.

Now to build an array of all the certs we care about. The variable $arrCertsExpireAfterToday is going to hold a selection of the CSV we loaded into $csvCerts. We take $csvCerts and pipe it into a few filters. The first: where the Certificate Expiration Date is greater than today’s date. That way we don’t look at any certs that are already expired. The second: where the Certificate Expiration Date is less than two months from now. That way we don’t see certs that expire in two years that we don’t care about yet. That’s it! That’s the array of certs. Now all we need to do is make the output look nice and send it.

On line 7, we start building the body of the email we’re going to send, assigning the future body of our email to $strResults. I want to send an HTML email because it’s prettier. My email will start with a line that tells us what’s coming next. Then we need to get the information out of $arrCertsExpireAfterToday and format it nicely so it may be sent. We’re going to pipe the contents of $arrCertsExpireAfterToday through a foreach-object loop that builds some nice HTML output containing the Issued Common Name, the Certificate Expiration Date and the Requester Name. You can format your report differently, use different headings, etc., but this is what worked for me.

I print $strResults on line 9, as the last thing I do in the inlinescript block so that $strResults becomes the value returned by the inlinescript block and therefore the value of $strCerts (line 4).

We’re almost out of the woods. All we have to do now is send the email.

workflow GetExpiringCerts
{
    $strInPath = Get-AutomationVariable -Name 'INFOLDER'
    $strCerts = inlinescript { 
        $csvCerts = import-csv "$using:strInPath\GETEXPIRINGCERTS\IssuingCA.csv"
        $arrCertsExpireAfterToday = $csvCerts | ? { (get-date -date $_.CertificateExpirationDate) -ge (get-date) } | ? { (get-date -date $_.CertificateExpirationDate) -le (get-date).addmonths(2) }
        $strResults = "Certificates expiring on NAME OF YOUR CA"
        $arrCertsExpireAfterToday | % { $strResults += "<br><br>"; $strResults += "<b>Issued Common Name:</b> " + $_.IssuedCommonName; $strResults += "<br><b>Expires:</b> " + $_.CertificateExpirationDate; $strResults += " <b>Requested By:</b> " + $_.RequesterName }
        $strResults
    }
     
    $strSubject = "The following certificates expire on Cora in less than two months"
    $strEmail = @("ThmsRynr@outlook.com","other@people.com")
    $strSMTPServer = Get-AutomationVariable -Name 'SMTPServer'
    send-mailmessage -to $strEmail -From "service_account@yourdomain.com" -Subject $strSubject  -SMTPServer $strSMTPServer -body $strCerts -bodyashtml
}

Easy. We need a subject, a list of people to send the email to, and an SMTP server. My email To list is an array and I store my SMTP server in an SMA asset. Then I use the send-mailmessage cmdlet to shoot this email off. Make sure to use the -bodyashtml flag so the HTML is parsed correctly instead of being included as plaintext.

That’s it! Set an SMA schedule to run monthly and you’ll get yourself monthly email notifications of certificates that are due to expire within 60 days even though the CA that issued them is powered off!

Read More

Which Exchange Mailbox Database Was A Certain User's Mailbox In On A Specific Day?

In Exchange, user mailboxes are stored in databases. You regularly back up these databases, don’t you? Good.

Now imagine the following. User A has a mailbox in Database01. This database is backed up daily. Now imagine User A’s mailbox was moved to another database, Database02. What if User A came to you and needed something recovered? Okay no problem, load up the backup for Database02 and you can recover anything for that user since the user has been on Database02. Wait, what do you mean you want something from BEFORE you were on your current database, User A? How am I supposed to know what database backup I need to mount to find your stuff? Exchange only knows what database you’re on, not what database you came from! Your data is on a backup for who knows which database!

My solution for this is a bit bulky. It involves automating a script to export a list of all users and the database they’re on. The idea is, if you export this list daily, you will have an archive of what database all your users are on for any given day. Even if they move, you can reference the output from this script and see which database their mailbox was on during the day in question. Then you will know which database’s backup you need to load to help User A get his stuff back.

For automation, I am using Service Management Automation (SMA). I love SMA. It uses PowerShell Workflows which are kinda, sorta, almost like regular ol’ PowerShell with some differences. I’ll point out the parts of my solution that aren’t vanilla PowerShell.

First things first, I need to declare my workflow and stick a Try Catch block in it:

workflow ExchangeMailboxDBList
{
    try
    {

    }
    catch [Exception]
    {
        $emailaddress = "ThmsRynr@outlook.com"
        $smtpSubject = "Exchange List Failure"
        $smtpserver = Get-AutomationVariable -Name 'SMTPServer'
        $body = "Could not connect to my server"
        send-mailmessage -to $emailaddress -From "your_service_account@domain.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body
    }
}

We’re already running into something funny. What is Get-AutomationVariable -Name ‘SMTPServer’? In SMA, you can store variables for any of your runbooks to use. This cmdlet is retrieving the previously stored value for my SMTP server. This is nice because if I change SMTP servers, I can update the single SMA asset instead of updating all my scripts individually.

Great! Now let’s actually write the part of the script that does something useful. Let’s start by initializing some variables:

workflow ExchangeMailboxDBList
{
    try
    {
       $mailboxes = inlinescript{ 
        $connectionURI = "http://fqdn.to.your.server/Powershell"
        $getMBXconn = new-pssession -configurationname Microsoft.Exchange -connectionuri $connectionURI -authentication kerberos
       }
    }
    catch [Exception]
    {
        $emailaddress = "ThmsRynr@outlook.com"
        $smtpSubject = "Exchange List Failure"
        $smtpserver = Get-AutomationVariable -Name 'SMTPServer'
        $body = "Could not connect to my server"
        send-mailmessage -to $emailaddress -From "your_service_account@domain.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body
    }
}

A couple new lines in the Try block! $connectionURI is pretty straightforward. You’re going to make a remote session to the Exchange Management Shell on your Exchange Server, so your script needs to know where that is. $getMBXconn is a new PSSession to the URI you specified. Notice that I’m not passing any credentials explicitly to this one. The service account that’s running this SMA runbook has the rights it needs to do this. You can either do the same, or you can pass specific credentials from an SMA asset or somewhere else.
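
If you did want to pass explicit credentials, a minimal sketch using an SMA credential asset might look like this (the asset name is hypothetical):

# Retrieve a credential stored as an SMA asset (asset name is made up)
$cred = Get-AutomationPSCredential -Name 'ExchangeServiceAccount'
# ...then, inside the inlinescript block, hand it to the session
$getMBXconn = new-pssession -configurationname Microsoft.Exchange -connectionuri $connectionURI -authentication kerberos -credential $using:cred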

What is it all wrapped in, though? Some inlinescript block? Like I mentioned above, PowerShell Workflows are different than PowerShell. Some of the commands coming up don’t work in PowerShell Workflows but do work in PowerShell. By wrapping the PowerShell script in an inlinescript block, it executes like real PowerShell instead of a PowerShell Workflow. The outcome of this inlinescript is going to get assigned to $mailboxes.

We have our connection, now we need to actually go get some data:

workflow ExchangeMailboxDBList
{
    try
    {
       $mailboxes = inlinescript{ 
           $connectionURI = "http://fqdn.to.your.server/Powershell"
           $getMBXconn = new-pssession -configurationname Microsoft.Exchange -connectionuri $connectionURI -authentication kerberos

           try
           {
               $mbx = invoke-command {
                   get-mailbox -resultsize unlimited -erroraction silentlycontinue | Select-Object -property database, name, ServerName, Guid 
               } -session $getMBXconn
               $mbx
           }

           catch [Exception]
           {
               $emailaddress = "ThmsRynr@outlook.com"
               $smtpSubject = "Exchange List Failure"
               $smtpserver = Get-AutomationVariable -Name 'SMTPServer'
               $body = "Could not invoke command on server"
               send-mailmessage -to $emailaddress -From "your_service_account@domain.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body
           }

           remove-pssession $getMBXconn
       }

    $sOut = Get-AutomationVariable -Name 'ABC-OUTFOLDER'
    $mailboxes | Sort-object -property database, name | Export-Csv -Path "$sOut\ExchangeBackups\MBX_Tracking$(get-date -f yyyy-MM-dd_HH_mm).csv" -NoTypeInformation
    }
    catch [Exception]
    {
        $emailaddress = "ThmsRynr@outlook.com"
        $smtpSubject = "Exchange List Failure"
        $smtpserver = Get-AutomationVariable -Name 'SMTPServer'
        $body = "Could not connect to my server"
        send-mailmessage -to $emailaddress -From "your_service_account@domain.com" -Subject $smtpSubject -SMTPServer $smtpserver -body $body
    }
}

Alright that looks like a lot of stuff at once, but it wasn’t actually too crazy. Let’s break down what we added.

Lines 17 - 24 are just another Catch block to tell us if we messed up in lines 11 - 13.

On lines 11 - 13, we’re invoking a command in our remote session. We’re running get-mailbox, specifying that we want all the mailboxes and that we don’t want to stop on an error. We pipe that into a select-object command to keep only the data we want: the database, the name of the mailbox, the server the database is on, and the GUID. On line 13, we write that data out (which in turn gets assigned to $mailboxes, since $mailboxes is the value of whatever comes out of the inlinescript).

On line 26, we get rid of our PSSession.

On lines 29 and 30 we’re writing our findings to a csv file and naming it in a way that includes the current date and time. That’s how we know what script output is for which day.

That’s our final solution! All you need to do now is create the SMA schedule to run this as frequently as it makes sense for you. You might also want to include some logic to clean up old files. Then you can just go to the output folder, find the file that corresponds with the day you care about and find the database that User A was on the day they deleted the wrong file.

Read More

First Post

Everybody knows that the first post on a blog isn’t supposed to have any real content or be super helpful. Let’s just get it out of the way, then.

You may be interested to know about a couple articles I wrote for SysJAM that would fit in well here:

  1. Using PowerShell to find out who has access to a directory
  2. Troubleshooting an issue with calling a SCORCH runbook from SMA

I guess I could also plug the About/Contact page in case you somehow missed the big link at the top of every page.

Read More