Two users are currently logged on to a server via Remote Desktop Protocol (RDP), and I want to ask them to log out while I perform maintenance. Windows has a utility called MSG that sends a message to a user who is currently logged on to a system.
To send a message to a remote server, from the terminal:
$> msg * /server:SERVER1 /time:30 /v "Could either USER1 or USER4 logout until 1PM? I need access to prepare for the migration next week"
* : I’m sending the message to all sessions
/server: The name of the server I’m sending the message to.
/time:30 : I’m giving the users 30 seconds to acknowledge the message. If no time is listed, the message will stay on the screen until the users click OK.
I wrote a script to get the MAC address off of several servers, and it required my username and password. I don't like to put my password in a script, but the credentials pop-up box for entering my password got a bit cumbersome. I decided to put it in a separate file for 2 reasons: 1) so my password is not saved in the script (I delete it from the file once I'm done) and 2) making scripts modular is what it's all about.
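If you're scripting this in PowerShell, one way to do it (a minimal sketch of the idea, not my exact script; the file name, server name and WMI query are just for illustration) is to save a credential object to its own file once, then have the script import it:

# One-time: prompt for the username/password and save them to a file
# (Export-Clixml encrypts the password for the current user on this machine)
Get-Credential | Export-Clixml -Path .\cred.xml

# In the script: import the credential instead of hard-coding a password
$cred = Import-Clixml -Path .\cred.xml
Get-WmiObject Win32_NetworkAdapterConfiguration -ComputerName SERVER1 -Credential $cred |
    Where-Object { $_.MACAddress } |
    Select-Object Description, MACAddress

Just like a plain-text password file, delete cred.xml once you're done.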
I passed the exam Friday (10/4) and it was nothing if not one of the most stressful things that could happen to a person. Here is a quick review of my test preparation as well as what happened the day of the exam.
Test Prep
There is a bit of a backstory. I took the Cloud Practitioner exam in June, and since so much of that material is relevant to the Solutions Architect exam, it's safe to say my studies started in June. I signed up for an Intro to Cloud Computing online course with The-ITEM and took the exam during my third week of the course. I stayed in the course because my husband wanted to learn about cloud computing and I could be there for moral support.
The materials I used to study with and pass the CSAA are as follows:
AWS Certified Solutions Architect Study Guide: Associate (SAA-C01) Exam book $
AWS Certified Solutions Architect Practice Tests: Associate SAA-C01 Exam book $
AWS Certified Solutions Architect-Associate Certification Guide book $
I watched the A Cloud Guru course from beginning to end, twice: once along with The-ITEM and once with my study buddy (more on that later). Since the Linux Academy content was in the process of being refreshed, I only reviewed certain modules in that course. As I got closer to the test date, the refresh was done and I watched a few sections through from beginning to end. Linux Academy has a very thorough way of going through the services. It will serve you well to watch this course from beginning to end. The depth and breadth of the 44-hour course is staggering.
The books were just as helpful as the course. I like to read physical books; being able to mark up, highlight and doodle in the margins was a plus for me. I'd print out the chapter quizzes or a few pages of questions and practice answering them, over and over. The exams in the A Cloud Guru course were good, as were the practice exams provided by the cert guide.
Reading the FAQs the day before and the day of the exam really helped me get a few extra points. I would highly recommend not skipping this step. The courses aren’t up to date, nor are they comprehensive. Getting the information straight from AWS is always a sound choice.
My AWS Study Buddy
After taking over a month off from study prep, I knew I needed to get back on the ball. The power of Twitter found me someone to study with and keep me on track to take the exam. I tweeted that I was looking for an AWS study buddy and somehow, my tweet found its way to a young lady in Portugal.
We met online twice a week for 2-3 hours each time. She’s 5 hours ahead, so she was talking to me in the middle of the night. We read and watched the courses in advance and would discuss the content, walk through the console and do the quizzes and exam questions together. I scheduled my exam just to put it on my calendar and have a date to work towards. She scheduled her exam for the same day.
As the exam date neared, I fell behind. My dog died and I was devastated. I didn’t feel ready and was about to reschedule the exam. My husband told me not to. He said, “only two things can happen, you will pass or you will fail and know just what to expect on the exam”. So, I pressed on.
The Exam
My study buddy took her test before me and she passed. She messaged me to tell me that I wouldn't have any trouble passing it either. She would know. We'd spent hours together going over content and questions and had so many discussions about all things AWS CSAA. She used almost every minute they gave her and suggested I do the same. I knew I would take her advice.
“Only two things can happen, you will pass or you will fail and know just what to expect on the exam”
Darryl Andrews
Whatever could go wrong at the test center did go wrong. The computer wouldn't log me in and someone had to assist. After that little hiccup, I was off and running, with 130 minutes to complete 65 questions. I went through the exam and flagged 18 questions, then at the end, revisited them. Next, I went through each question again, from beginning to end. I told myself that I'd end the test with a minute left. As that time approached, I felt very good about the exam and went to end the test.
NOTHING!
The system was still counting down, but I wasn't able to end the test. I didn't know what to do. I got up, ran out of the room and screamed, "MY TEST WON'T END!!!" I had 30 seconds left and no idea what would happen. When I got back to my seat, the test had timed out. The proctor tried to get my test to end, the screen to move, anything. Nothing happened. She walked out and I followed her, heart racing and a tear in my eye. She reassured me that the test would save and nothing would be lost. She had to call someone, and he walked her through the process; she was able to log me back in and end the exam for me. I sat back down, completed the survey and got the notification I was waiting for. I PASSED!
I took to Twitter to scream it from the rooftop that I passed.
…and that my study buddy passed too!
A load was lifted. All that work paid off. Next up, the AWS Certified SysOps Administrator Associate exam.
While installing LAPSx64.msi, I'd get this error message when installing via double-clicking.
When I’d install it from the command-line (elevated), I’d get a different error.
> msiexec /a "LAPSx64.msi"
Launch gpedit.msc and browse to Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Windows Installer. Select the "Turn off Windows Installer" setting and click "Edit policy setting". Enable the policy, and in the options pane, click the down arrow under "Disable Windows Installer" and select "Never". Click Apply.
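If gpedit.msc isn't handy, the same setting can be made in the registry from an elevated prompt (a sketch; this assumes the standard Windows Installer policy key, where a DisableMSI value of 0 corresponds to "Never"):

> reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v DisableMSI /t REG_DWORD /d 0 /f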
Security is everyone's job. There, I said it. Now that I've gotten that off my chest, I'll tell you how the first Amazon Web Services (AWS) re:Inforce conference went.
The Senior Information Security Architect at my job wasn’t able to attend the conference and asked me to go in his place. With the focus being on security, this wasn’t something I would have picked for myself. Alas, my manager said I could go if I came back and shared what I’d learned. I’m so glad I did.
Day 1
Déjà vu all over again. I was just here at the Boston Convention Center a few weeks ago for Red Hat Summit, which meant I'd have a greater chance of finding my sessions. They had shuttles to and from hotels, which was great, but upon entering the convention center, there were metal detectors and bag checks. I've never been to a conference where they had metal detectors and went through your stuff. It felt like I was at the airport, except I didn't have to take my shoes off. You had to empty your pockets, and if you had keys or any metal, you had to walk through with it in your hands and your hands over your head (like don't shoot). Of course, the metal detector goes off as I walk through. The guard wands me and stops on my pocket. He starts getting louder and louder, asking me what's in my pocket over and over again. I said, "nothing" and he asks again, so I just lifted my shirt up, patted my pocket and said, "nothing!!!". He lets out this little laugh and says, "oh, it's your jeans." How many people do you think walked through there with grommets on their jeans? DO BETTER, re:Inforce organizers.
Off to breakfast. There is nothing good to report here. On to my review of the keynote.
Keynote
Tuesday started with the keynote, led by AWS VP and CISO Steve Schmidt. His talk started off separating AWS from the other cloud vendors by way of the revenue generated and the number of 'regions' competitors have versus the number of regions AWS has. With 21 regions and 66 availability zones, the way AWS constructs regions seems to far surpass that of the next closest competitor.
There was a lot of emphasis on security of the cloud and security in the cloud, which is called the shared responsibility model. Looking at the culture of security (this is a security conference, right?), it must be "built into what we do every day". Touting AWS products that will provide the type of granular security, monitoring and compliance businesses need now and in the future, he hoped we all walked away with 3-5 things to make us more secure.
Separated into chapters, the talk covered the following topics:
Chapter 1: The Current State of Security
Chapter 2: Culture of Security
Chapter 3: Governance, Risk and Compliance
Chapter 4: Security Deep Dive
Chapter 5: The Future of Cloud Security
As he reviewed the current state of security, he hailed the fact that 94% of all websites are currently using SSL, but on the other end of the spectrum, 94% of all IoT devices are sending information in plain text. AWS has a service called AWS IoT Device Defender, a fully managed service that gives you a way to patch and update devices and, even more importantly, encrypt device data.
There is a service called AWS Ground Station, a fully managed service that lets you control satellite systems as well as ingest and process all of that data.
The most talked-about suite of security services in this keynote was Security Hub (which just went GA), GuardDuty, Inspector and Macie. Together, they provide automated compliance checks of applications and resources, use machine learning to analyze and monitor account activity and networks, and classify and protect sensitive data. Although separate products, they seem to always be mentioned together.
He mentioned that "encryption is no silver bullet", but it surely beats a blank. A new feature that customers have been waiting for is Elastic Block Store (EBS) encryption by default. You can opt in to have all newly created volumes encrypted at creation, with the ability to use customer managed keys or AWS default keys. Since keys are regional, you have to opt in region by region. With this, on top of layering defenses, AWS is putting security at every level.
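If you'd rather opt in from the AWS CLI, it looks something like this (a sketch; the region is just an example):

# Opt in to EBS encryption by default for one region
aws ec2 enable-ebs-encryption-by-default --region us-east-1

# Verify the setting took
aws ec2 get-ebs-encryption-by-default --region us-east-1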
There were many more services mentioned and reintroduced: Control Tower, Config Rules, IAM Access Advisor + Organizations, AppMesh, Nitro w/ Firecracker, Radar Framework, Root CA Hierarchy for ACM and so many more, I thought they were just making stuff up at this point.
How to Secure Your Active Directory Deployment on AWS
This is the session that I looked forward to the most. Since we are working towards deploying Active Directory (AD) to AWS, this was pretty timely. The presenter, an AWS employee, discussed the use cases for deploying AD to AWS, then gave an overview that covered 2 deployment types: self-managed AD and managed AD. Starting with an overview of the basics of AD, he used the shared responsibility model as the starting point to draw the distinction between the two solutions.
The managed AD solution is of course easier and less work to deploy. Creating a separate forest or domain and either a 1-way or 2-way trust in the beginning was the biggest part of implementing that solution. The only things the customer has to worry about after that are the users, groups and group policy. We looked at that solution in the beginning, but given the level of access we require in our domain, we opted for self-managed AD, where we deploy a server and promote it to a domain controller (DC). This allows us to extend our on-prem AD out to AWS and work with our single sign-on.
He discussed the separation of responsibilities by creating an account structure that splits the management of AD into separate accounts using AWS Landing Zone. He also suggested creating a separate organizational account that logs all accounts using CloudTrail and AWS Config logs, as well as a security account that holds the GuardDuty master.
This talk covered quite a bit of very relevant information for me. I’ll definitely be reviewing the slides and rewatching the session.
Securing Serverless and Container Services
This talk was on 2 technologies I'm not very familiar with: serverless and containers. He talked about common-sense approaches to securing both technologies, using slides that covered multiple security domains and services, as well as the Cloud Adoption Framework from a security perspective. Slides & recording.
Security Best Practices and the Well-Architected Way
As a student of the Well-Architected Framework, this session gave me a great primer into how AWS provides services that uphold the security pillar. With the Well-Architected Tool, which is free to use, you can review your workloads and discover areas where you can improve technical decisions on how to secure your workload in AWS. I also found out about the labs on security as well as other pillars of the framework. This looks like a very good resource to play around with tools (outside of your production account, of course) and discover what's available. Slides & recording.
Learn to Love The AWS Command Line Interface
This was a talk held in the expo center at the Developer's Lounge by one of my favorites who teaches online AWS certification classes on Udemy and A Cloud Guru, Ryan Kroonenberg. I was so excited to see his tweet that he was doing a talk on the AWS CLI. It was the exact same talk he did at AWS Public Sector Summit, just with a different title.
I wasn't the slightest bit upset by it. At his talk at Summit, he mentioned he used Amazon Polly to help him study for exams. I took his advice, learned about Polly and did the exact same thing for my exam, which was a little over a week away. I typed my notes up, used the SSML markup and was able to download them all as MP3s. It was so rad to be able to study on the go.
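If you want to try it yourself, the CLI version looks roughly like this (a sketch; the voice, SSML snippet and file name are just examples):

# Convert an SSML snippet of notes into an MP3
aws polly synthesize-speech \
    --text-type ssml \
    --text '<speak>S3 is object storage. <break time="500ms"/> EBS is block storage.</speak>' \
    --voice-id Joanna \
    --output-format mp3 \
    notes.mp3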
Before the talk started, I asked if I could get a selfie with him, because he was swamped at the end of his talk at Summit. Of course he obliged, and his right hand, Faye Ellis, volunteered to take the photo. There was NO WAY I was going to have her take the photo; I wanted her in it.
He went over 20 CLI commands and stipulated that this talk wasn't aimed at gurus, just regular folks who want to learn about what's possible in the CLI. He covered installing it on Mac and Windows as well as setting it up with your access keys (the very insecure way, but hey, that's how we all learned). There were quite a few commands that I didn't know about or had forgotten about. I didn't use Polly via the CLI, but this time I took a photo of the URL in the slides and I will definitely check it out.
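For anyone who hasn't set the CLI up, the basic flow is just two commands (a sketch; the S3 listing at the end is my own example):

# Store your access key, secret key, region and output format
# (written in plain text to ~/.aws/credentials -- the easy-but-insecure way)
aws configure

# Quick smoke test: list your S3 buckets
aws s3 ls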
Of course, I had a better grasp on some commands the second time around. It was 30 minutes well spent and I got to thank them for the great content. There was no need to take notes; he put all the commands up in S3 for our CLI enjoyment.
Threat Detection on AWS: An Introduction to Amazon GuardDuty
Finally, a primer on GuardDuty. By this time, I'd heard so much about this product, it was high time I found out what it actually was. My colleague said we were already using it, so now I was even more interested in seeing it for myself.
GuardDuty is a regional managed service that can aggregate logs across AWS accounts and analyze them for unexpected and/or malicious behavior, which it reports in a record called a finding. With no agent needed, it takes information from VPC Flow Logs, CloudTrail events and DNS logs and produces the findings. Rated high, medium or low, findings contain information about the resource in question and the behavior detected. You click on one for even more details about the issue. Details may include the account ID, the type of resource, the port, the number of times it's been logged, as well as a link to learn more about the behavior.
GuardDuty gets its threat intel from CrowdStrike, Proofpoint and threat information gathered by AWS. With this much information, you can imagine the number of events being processed. This data is never logged, just streamed and processed in memory, unless the log entry contains a finding.
Once you get a feel for the types of behaviors that are occurring in your environment, you can set up automated remediation using Lambda and CloudWatch Events to take action on a finding. If someone adds or changes a rule to something insecure, like port 22 open to 0.0.0.0/0, you can create a Lambda function that will lock the port down to whatever you like.
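The core of a remediation like that boils down to one revoke call; here's the CLI equivalent of what such a Lambda function would do (a sketch; the security group ID is made up):

# Remove the offending rule: SSH (port 22) open to the world
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0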
I’m sure it will be a great tool in our AWS security arsenal. Slides & recording.
Day 2
How to act on your security and compliance alerts with Security Hub
This talk was aimed at getting customers to look at Security Hub (SH) as a way to address compliance. With two AWS employees and two SH customers, they started off with 4 problem statements that outlined issues that can be addressed by this product.
Backlog of Compliance requirements
Too many security alert formats
Too many security alerts
Lack of integrated view
SH offers a single view into your security and compliance tools. Using best practices suggested by the Center for Internet Security (CIS) AWS security benchmarks, you'll get a compliance score against their standards. It's a bit like GuardDuty in that it offers a single view for you to review, triage and take action on issues. It even works with GuardDuty, as well as Macie and Inspector, as they can send their findings into SH for review. You can also centralize accounts, and it will give you insight into the types of issues it discovers across your organization.
There are plenty of third-party integrations, like CrowdStrike Falcon, Palo Alto VM-Series and Splunk Enterprise, that you can enable to consume their data. With the provided CloudFormation templates, you can set up the integration between them and SH. You can also send findings to partners like PagerDuty, Slack and Splunk for even quicker notifications.
Aligning to the NIST Cybersecurity Framework in the AWS Cloud
This talk was way over my pay grade, but I was able to glean some gems to bring back to my colleagues.
I learned what the NIST Cybersecurity Framework is and what industries, organizations and even states use it. They mentioned a whitepaper on it as well as a workbook that outlines the responsibilities.
I had to run out in the middle of the talk to grab a special swag item by request, but here are the slides and recording.
Securing your Block Storage on AWS
This talk was an overview of block storage in general, as well as the ability to opt in for default encryption on new EBS volumes. It's just a check box, and from then on, all new volumes will be encrypted using a key you create or a default key. Although you'll need to enable this on a region-by-region basis, you can forever be sure that volumes will be encrypted.
There was so much talk of KMS, I decided to make sure I dropped into the hands-on labs to see if I could get some time with it.
I hope the slides and recording can shed light on this. This session was PACKED. The walk-ups couldn't even get in. ***Inside Hack*** Next time, walk in on an empty line, grab some headphones and sit in an empty seat in another section.
Hands On Labs
I passed on the last 2 sessions of the day to get some time in with the hands-on labs. When you entered the room, you were given a ticket with a code that gave you 1 free lab on Qwiklabs. Once you were done with a lab, you could get another code and learn something else. I was able to knock out quite a few before they closed down. Here are the labs I completed.
Caching Static Files with Amazon CloudFront
Introduction to Amazon EC2
Working with Amazon Elastic Block Store (EBS)
Working with Elastic Load Balancing (ELB)
Introduction to AWS Key Management
Introduction to AWS Identity and Access Management
The EC2 and IAM labs were elementary, but I'd never created an application load balancer before, so it was a pleasant surprise how straightforward it was to set up.
End of the conference
After an exhausting day and an AWS online study group to get to, I didn’t go to the closing reception. However, I was able to make my way to the expo floor and snag a few more t-shirts and a beer.
Overall, this was a really good conference. I learned a lot about services I’d never heard of and more about services that I use frequently. With all this information about what AWS has and how some services work together, I feel like I’m in a better position to investigate and dig around the console more and gain some nuggets for the Solution Architect exam.
I passed the exam today, so before I’m inundated with work stuff and AWS re:Inforce next week, I thought I’d write this up while it was fresh in my head.
I set my intentions this year to move forward learning more about AWS and getting a few certifications along the way. I started a new job in January that has some production but mostly test/dev workloads in AWS. Once I got my AWS credentials, I was off and running. I logged in and took a look around at what was running. I started trying and failing at a few things, but I learned a lot along the way.
I'd promised myself I'd get my AWS Certified SysOps Administrator Associate (SysOps) certification this year, so I set off in that direction. I'm going to admit, it's better to have a little bit of experience in AWS before you dive into that exam. My boss suggested I try for the AWS Certified Solutions Architect Associate (SA) exam first, so I changed course. I discovered there was an even more entry-level certification, the AWS Certified Cloud Practitioner exam, so I decided to try that one first while studying for the SA.
To start my study, I purchased a book by Anthony Sequeira on May 17th and set to reading. I also started the Linux Academy course on the same topic. My SA study group started on June 3rd, held by a local group called The Item, which stands for "The Inclusive Technology + Entrepreneurship Movement". My husband decided to join the group as well; now we're both studying to become SAs!!!
To study for the exam, we get on Zoom 3x a week and talk about the topics on the exam. We use Qwiklabs, the 'A Cloud Guru' course on Udemy and, of course, Linux Academy's course and playground to reinforce what we've talked about with more practical experience.
Now, back to the Cloud Practitioner exam. I will admit, I like getting information from various sources. I tend to grasp certain topics better when the delivery comes in several formats (blogs, books, videos, podcasts, tutorials, flash cards). I also tried Amazon Polly, which converts my notes to speech. It was such a hit! With just a few tags to make the speech more 'human', I was able to listen to my notes on my commute using MP3s downloaded from S3 (AWS Simple Storage Service).
I can say, with 100% certainty, that the icing on my studying cake was watching the AWS Cloud Practitioner Essentials course on their site. This was what I watched in the days leading up to the exam in addition to taking practice exams on the Pearson website.
I didn't fully grasp IAM roles and policies until I watched the Identity and Access Management video in the 'AWS Cloud Practitioner Essentials: Security' video by Blaine Sundrud. His explanations and white-boarding really hit it home for me. Also, what gave me confidence in my understanding of the Well-Architected Framework (on top of having read it) was the video on AWS. I recommend watching it and reading it as well. These concepts are important to grasp.
I stayed up late the night before and got up early on test day just to watch more of the videos on AWS. I also did a few more runs on the practice tests, scoring 93-100% all the way. I felt ready. I got a few good-luck emails from the CTO, my boss and a few teammates. I got to the PearsonVue location early and was ready to go. The wait almost did me in. With 2 people ahead of me for my 9AM test appointment, I didn't get into my test chair until 9:30.
Once there, I was off and running. I was done in almost 30 minutes, but marked several questions for review. After reviewing about 10 or so questions, I started from the beginning and went over each question again. They gave me 90 minutes, so I used an hour of it. I wasn't in a rush after having waited over 30 minutes to get to my workstation. When it was all over, I exhaled, ended the exam and found out I passed.
My main advice to anyone studying for either exam is to practice. Go through the console and get an idea of where everything is. Then step through creating the resources and get a feel for what the configurations look like. Know the terms and their nuances; you will be tested on similar-feeling terms, so know exactly what they mean. Read the FAQs for key services and don't forget to commit the Well-Architected Framework to memory. In my opinion, your success on your practice exams will closely mimic your success on the real exams. Sadly, my results haven't been posted to my account yet. I was hoping they'd be up before the AWS conference next week so I could stunt in the certification lounge.
The resources I used for the Cloud Practitioner exam are as follows:
AWS Well-Architected framework training course (free)
Good old fashioned flashcards (free)
Copious notes (free)
You don't need all of this to pass this exam. I'm just fortunate to have access to so many resources, so find what works within your budget and work hard. To find out more about AWS certifications and to register for an exam, visit AWS training and certification and set up an account.
Good luck and I’ll see you when I’ve taken the SA exam.
*UPDATE* I got my results! Now it’s official. I can proudly flex my badge and get into the certification lounge at AWS re:Inforce and re:Invent.
I'm just settling back into the office after attending my first Red Hat Summit. What an adventure!!! When I first walked into the Boston Convention Center, I knew I was in the right place. Welcomed by red carpets everywhere and the spiffy new logo, with a nod to accessibility right as I entered, I knew I was where I was supposed to be. My flight got me in Monday afternoon, and I made it to the First Timers Reception. After I picked up my registration badge and backpack and took the obligatory selfies, I headed to the shindig.
A huge breakout room with multiple bars, food stations on my left and right and wait staff walking around with snacks, this room was buzzing. I knew for certain I didn't know anyone here, so I grabbed a beer and looked for a place to hang out. Almost immediately, I was chatting it up with a few guys who were Red Hat Accelerators.
“The Red Hat Accelerators (RHA) program is a global customer network of Red Hat (RH) technology experts and enthusiasts who willingly share their IT knowledge and expertise with peers in the industry, the community, and with Red Hat.”
From Red Hat
We did intros, laughed and talked until the end of the reception. One guy asked if I’d be interested in becoming an accelerator and gave me a card that had a QR code on it to find out more about the program. I was sure I wasn’t interested, but I took the card anyway and promised to check it out.
I didn't get into many of the sessions I'd hoped to. I'd built my agenda in advance, but so many sessions filled up very fast. Success struck on a few, so I was able to build an agenda that kept me busy all week. Here is a peek at some of the sessions I attended.
Day 1
Winning with Red Hat Enterprise Linux: As told by our most fanatical customers (slides)
Wouldn't you know, a few of the guys I'd met at the reception were on this panel. It was moderated by Marc Richter, a Red Hat Senior Technical Account Manager (TAM), and it was a Q&A about all things Red Hat. The panel talked about their joys and pain points of using RH and the features and services they'd like to see improve. It was a lively session and a great way to ease into the fire hose that was about to open on me.
Making DevOps managed services work for you
Another panel that talked about how they are implementing DevOps in their environment. I heard so many buzzwords, but one that kept being repeated was ‘velocity’. The context clues gave me an idea, but I was so surprised at just how many times it was being uttered. I guess DevOps = Velocity. However, all I kept thinking about was the Velocity Conference that was coming in June and that I wouldn’t be going to because I was slated to attend AWS Public Sector Summit in DC at the same time.
Move at the speed of DevOps:
This session really piqued my interest. I'm new to DevOps, and I've been tasked with developing an environment for our developers, so this seemed like the place to start. Greeted with a full house, I guessed many others were in my boat and wanted to learn more. The slides were text-heavy, with small fonts I couldn't make out; I tried to take a few photos to record some of the nuggets, but sadly, I couldn't read them. One acronym they went over was PACE, which focused on 4 key pivots on the path to DevOps:
Process
Architecture
Culture
Engineering
As the talk went on, it started to feel more and more like a sales pitch. I even asked the guy next to me, and he felt the same way. The slide deck hasn't been released yet for this talk, but I'll surely keep my eye out for it.
Although this was only a mini session, I wanted to attend to see what could be done with both Puppet and Ansible. We currently use Puppet for configuration management on Linux systems, and I'm 1) looking for more insight into Puppet and 2) trying to discover how the two can work together. This session talked about how Red Hat started with CFEngine first, then migrated to Puppet. The Puppet language is hard to learn, and dependency ordering proved challenging. It was also slow and didn't scale well. After 10 years with Puppet, they found it even harder to learn, and updating modules to 5.x was nearly impossible.
Then they moved to a hybrid approach with Puppet and Ansible. Ansible was great for orchestration, and ordering dependencies was much easier; you just wrote the playbook and you were on your way. Ansible was also much easier to learn. Then came Ansible Tower. With the ability to centralize playbook execution and manage credentials, Ansible became an important part of configuration management. However, it was not going to replace Puppet: Hiera proved better at managing environment data, and Puppet ERB templating was better than Jinja2. If you don't use Ansible Tower, centralized auditing could prove challenging.
Puppet Pain Points
Red Hat has a podcast that I listen to called Command Line Heroes, and it's hosted by the one and only Saron Yitbarek of #CodeNewbie fame. They had a space set up at the conference with command-line video games, interviews for the podcast and 2 people doing caricatures. I did a quick interview and had my caricature drawn as I talked. Pretty neat, huh?
Desperately seeking DevOps: How do you change the way you work? (notes)
This session wasn't what I was expecting. It was a "birds-of-a-feather" session, which meant you were in a room with other Red Hat customers who were also desperately seeking DevOps. It started kind of like an 'unconference': you wrote down topics that you'd like to talk about, and the attendees each voted for 3 suggestions by putting a dot on the post-it note with an idea they'd be interested in discussing. One of my topics got picked for discussion: you've decided on the DevOps approach, now what?

Sadly, it was less about what tools folks were using and the onboarding involved, and more about how to change the culture in order to even begin the process. Bigger organizations seemed to have much more friction with changing mindsets than anything else. One company did find success with making sure teams were trained. They took time to level the team up with workshops on things like Python, learning the tools, etc. More time up front yielded better results from the teams involved. Also, developers had to be included in this training because they had to learn and be responsible for their role in the pipeline.
Voting on which topics to discuss
At the end of a very long day, they held the General Sessions, which lasted 2 hours. Seating was difficult, so I sat on the floor in the back of the hall. After that became impossible, I found the Active Lounge. It was empty and had a TV casting the session. I put my feet up and watched. There was a conversation between the Red Hat CEO, Jim Whitehurst, and IBM CEO, Ginni Rometty.

Fun fact: I met Ginni at Microsoft Ignite in 2017. There was a women's lunch during the conference and it was very well attended. I'd registered well in advance, but it was hard to find seating. As the usher walked me around the room, Ginni invited me to sit at her table with her and her colleagues in the front of the room. It was an honor and a very nice gesture. Of course I stuck out like a sore thumb in my jeans and t-shirt while the other ladies were resplendent in their dresses and suits, but I didn't care, I had the best seat in the house! Anyway, I digress.

They spoke about the history of open source and how both companies have invested in it over the years. Jim said, "Open source is the vehicle for innovations that matter" and that got a big round of applause. Ginni got her own round of applause when she said, "We'd like to keep Red Hat separate".
A Round of Applause for keeping Red Hat separate from IBM
Another great conversation was Jim speaking with Satya Nadella, the CEO of Microsoft, about their commitment to open source and the flexibility of hybrid and edge cloud. There was also the big announcement of general availability (GA) of Azure Red Hat OpenShift. Many CEOs got on stage and talked about how they're using Red Hat products to further their own innovations. Other big announcements were GA for Red Hat Enterprise Linux 8 (RHEL 8), the Red Hat Universal Base Image (UBI) and Red Hat Insights adding support for Microsoft SQL Server on Linux.
As the keynote wrapped up, the welcome reception that followed was held in the expo hall. Vendors we’ve all heard of had booths and plenty of swag to give away.
Day 2
Mental shift from system admin to system architect:
This session was more about the move from sysadmin to leading a team, but I found some nuggets that proved helpful. Thinking in terms of growth for the team and identifying my own strengths and weaknesses aids that shift. So does keeping up on tried-and-true technical skills while developing new ones. Lastly, creating a roadmap for future growth and reviewing it often helps you stay on track.
Next, I rounded off my morning with 2 sessions on open organizations and open leaders.
Giving people a voice in an open organization: (slides)
We've all been in an organization where there is one person who is always talking and some folks who never speak up. This session focused on how to "create an environment where everyone is heard equally". Led by 2 Red Hat staffers, it talked about processes that helped teams communicate better and "collaborate on equal terms regardless of tenure". This session spoke to me. I've seen this dynamic where there is someone who is always talking, and over the years, I've tried to be the person in the room who pays attention to faces. Sometimes faces change when people agree or disagree with something that's being said. I try to direct questions to people who look like they have something to say and keep them included in the conversation. I know how it is to be the quiet one in the room, and those small gestures are sometimes just enough to make a space more inclusive.
Beyond engagement: What open leaders need to know about empowering others (slides)
This session followed up the previous one quite nicely. Led by another Red Hat employee, it talked about how we should empower individuals just as much as we empower teams. She talked about how leaders should build other leaders, to further both the growth of the company and the growth of the individual. This helps "every individual reach his or her potential". She provided a link to the Open Organization Maturity Model, a framework that Red Hat uses to help teams, individuals and the organization as a whole become more transparent and inclusive.
One thing this session reminded me of was Proverbs 27:17: "As iron sharpens iron, so one person sharpens another."
Puppet and Ansible in Foreman:
This session was in the community theater in the expo hall, and the sound was so poor I couldn't hear the speaker. I didn't know what Foreman was, so I had to look it up. Foreman is a server lifecycle management and orchestration tool, commonly used with Puppet and Chef. It would have been nice to learn more.
Since I was in the expo hall anyway, I skipped my last session of the day and explored. I took some hands-on demos in the Red Hat booth and got a t-shirt for completing each one. Once you did 3, you got a hoodie as a bonus!
Day 3
CentOS 8 and Beyond:
This was another session held in the expo hall, and it was hard to hear the speaker. It was mostly a Q&A between the speaker, a Red Hat employee who works on the CentOS project, and the audience. There is no ETA on the release of version 8, but it's coming. There was an overview of the build process, how CentOS is working with Fedora, as well as the steps to get it to GA.
Open Management: The next frontier in open culture:
This was a panel discussion of Red Hat managers, a Q&A on how they are using open culture to support teams and individuals, as well as how managers are still necessary and what part they play in an open culture. They'd have a question on a slide, and each manager got a chance to reply. A few nuggets I took away: empathy is a necessary quality in a manager. How can you manage humans if you have no empathy? Not having an empathetic manager is a cancer, IMO. Why would you even try if your manager doesn't exhibit empathy? If they don't care, how can you?
In between sessions, I checked Twitter and discovered that Linux Academy had just added RHEL 8 to the cloud playground!
Manage Windows with Ansible: The what, the why, and the how? (slides)
This was the session I'd looked forward to the most. I want to do more automation in deploying Windows systems here, and we're just not there yet. I'd already heard about how well Puppet and Ansible work together, but this session focused just on Ansible and Windows. There were a few code examples of how the declarative language of Ansible can help build, patch, secure and configure Windows systems, plus tips on software management and how great PowerShell DSC is with Ansible. After sitting through this packed session, it was good to know that Linux isn't the only party in town; the automation pain points for Windows administration still exist, but there are ways to make them easier with Ansible. However, learning PowerShell, DSC and even more Python are in order. To make writing playbooks even easier, there is an Ansible extension for Visual Studio Code.
There was a call to action at the end of the talk to “Learn how to Ansible”. I’d used Ansible for Linux in a test environment before, but never in production and NEVER for Windows. I think I’ll take his advice and wade in.
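For anyone else wading in, the first step is usually an ad-hoc connectivity check (a sketch; it assumes an inventory file with a [windows] group already configured for WinRM):

# Ping the Windows hosts over WinRM (not ICMP, not SSH)
ansible windows -i inventory.ini -m win_ping

# Ad-hoc example: make sure a directory exists
ansible windows -i inventory.ini -m win_file -a 'path=C:\temp\demo state=directory'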
Security-Enhanced Linux for mere mortals:
I'll admit it, I don't get SELinux. It has always been something that caused more problems than it solved, IMO. As a mere mortal, I signed up so I could get a better understanding of how to enable it without breaking everything. The speaker understood how painfully hard it is to even grasp SELinux; the documentation in the beginning was downright horrible, but over the years it has gotten better, and SELinux is much easier to manage.
He showed scenarios and demos on how to configure, set rules and policies, and troubleshoot SELinux issues. I had to agree, it did SEEM easier in the situations he presented, but more understanding would definitely be needed. I thought I'd be able to walk away from the conference without one snarky comment about Windows admins, but I was wrong. He had to throw in the tired old trope, "It's so easy, even a Windows admin can do it", at the very end. Alas, no conference is ever perfect.
SMH
I’d learned quite a bit about what tools and services Red Hat and the ecosystems surrounding them provide. There were so many that I hadn’t even heard of before, but I guess that’s why you go to conferences, to gain a new insight on what’s possible and find out more about things you only know by name. I’m looking forward to attending again. It was a great experience. Who knows, maybe next year I’ll be back as an accelerator.
I just got a call from the helpdesk: a tech wasn't able to install from an MSI on the apps share on a file server. I checked his share & security permissions (OK), and he could install other MSIs (OK).
I copied the file to the workstation he was working on and still got the same error. Whelp, it's not the share. I checked that he was in the local admins group on that Windows 10 workstation (OK) and had him reboot; still no go. He was not able to run the MSI on this system. I had him launch a command prompt as administrator, browse to where I put the MSI and install it using msiexec.
msiexec /a "NameOfInstaller.msi"
The /a option actually performs an administrative installation rather than a normal install, but run from the elevated prompt, it worked for him.
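If /a hadn't worked, the next thing I'd try is a normal install with verbose logging to see where it fails (a sketch; the log path is arbitrary):

> msiexec /i "NameOfInstaller.msi" /l*v C:\temp\install.log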
I want to run some mysql commands from my terminal, but each time I run mysql -u root to log in, I get an error:
-bash: mysql: command not found
This is because mysql isn't in my PATH. Your path is an environment variable that holds directories that your computer will search through to find executable files. You can review what's in your path by running:
# echo $PATH
This will show you the list of directories that are in your path.
This variable is stored in your .bash_profile. On most Macs, the .bash_profile file is located in the root of your home directory. To view the current .bash_profile file, go to your home directory:
# cd ~
From there, run the ls command to view all files.
# ls -alh
Near the top of the listing is the .bash_profile file. Cat out the contents to see what's in your path.
# cat .bash_profile
To update the file, first back up your current .bash_profile:
# cp .bash_profile .bash_profile_backup
To locate your mysql executable, use the locate command.
# locate mysql | less
Since I'm running MAMP, my mysql executable is located in /Applications/MAMP/Library/bin/mysql. To add its directory (/Applications/MAMP/Library/bin) to the end of my path, run the following command:
# echo 'export PATH=$PATH:/Applications/MAMP/Library/bin' >> ~/.bash_profile
This takes the output of the echo command and appends it to your .bash_profile, so the change will persist in future terminal sessions. To apply it to your current terminal, reload the file by sourcing it:
# source ~/.bash_profile
Now you can echo the path again to confirm, or cat out your file:
# echo $PATH
Now we can use the mysql command from our current location. Enter the password, if prompted.
Save and close this file and re-run sudo mysqld_safe --skip-grant-tables &
You should see a message that reads: mysqld_safe Started mysqld daemon with databases from /var/lib/mysql
Open another terminal and log into mysql
mysql -u root
Now you can reset your password. Run these commands:
mysql> use mysql;
mysql> UPDATE user SET password=PASSWORD('pass123') WHERE user='root';
mysql> FLUSH PRIVILEGES;
mysql> quit
If you get an error when running the UPDATE user command that reads: ERROR 1054 (42S22): Unknown column ‘password’ in ‘field list’, do this next.
Assuming this is for the mysql database:
mysql> use mysql;
mysql> show tables;
If there is a user table but you don't see a password column, describe the table to list its columns:
mysql> describe user;
There is a column called 'authentication_string' that holds the password.
You will need to run this command to reset the password. It's just like the command above, but references the correct column in the user table.
mysql> use mysql;
mysql> UPDATE user SET authentication_string=PASSWORD('pass123') WHERE user='root';
mysql> FLUSH PRIVILEGES;
mysql> quit