Optimizing C# Applications
- Customer satisfaction
- Customers report performance problems
- Reduce churn rate
- Tip: Ask your users whether they are leaving because of poor performance
- Raise conversion rate
- Consider the first impression potential users get of your software
- Tip: Ask your users why they are not buying
- Reduce TCO of your application
- Performance problems waste your users' time (and time = money)
- Reduce TCO for your customers by lowering system requirements
- An inefficient application makes cloud environments too expensive
- Add optimizations during initial development
- Aka premature optimization
- Write obvious (not naive) code first -> measure -> optimize if necessary
- Perf problems usually turn up where you don't expect them
- Optimize code without measuring
- Without measuring, optimized code is often slower
- Make sure you know whether your optimization brought you closer to your goals
- Optimize for non-representative environments
- Specify problematic environments as accurately as possible
- Test your application on systems similar to your customers' environments
- Hardware, software, test data (consider data security)
- Optimization projects without concrete goals
- Add perf goals (quantifiable) in requirements
- You could spend endless time optimizing your applications
- Optimize to solve concrete problems (e.g. for memory, for throughput, for response time)
- Soft problems or goals
- Strive for quantifiable perf metrics in problem statements and goals
- Objective perf problems instead of subjective stories
- Optimize without a performance baseline
- Always know your performance baseline and compare against it
- Reproducible test scenarios are important
- Optimize without profound knowledge about your platform
- Know your runtime, platform, hardware, and tools
- Optimize the wrong places
- E.g. optimize C# code when you have a DB-related problem
- Spend enough time on root-cause analysis for your perf problems
- Ship debug builds
- Release builds are much faster than debug builds because the JIT optimizer is enabled (see the sketch below)
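A minimal sketch, assuming you want to verify at runtime that an optimized build was shipped; the check relies on the DebuggableAttribute the C# compiler emits for debug builds:

```csharp
// Sketch: check whether the running assembly was compiled with JIT
// optimizations enabled (i.e. a release build).
using System;
using System.Diagnostics;
using System.Reflection;

class BuildCheck
{
    static void Main()
    {
        var attr = Assembly.GetEntryAssembly()?
                           .GetCustomAttribute<DebuggableAttribute>();

        // Release builds either omit the attribute or leave the JIT optimizer enabled.
        bool optimized = attr == null || !attr.IsJITOptimizerDisabled;
        Console.WriteLine($"JIT optimizations enabled: {optimized}");
    }
}
```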
- Optimize everything
- Focus on performance-critical aspects of your application instead
- Pareto principle (80/20)
- Architect without performance in mind
- Avoid architecture with inherent performance problems
- If necessary, consider prototyping in early project stages
- Confuse performance and user experience
- Async programming might not be faster, but it delivers a better user experience (see the sketch below)
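A minimal sketch of the difference: both variants take roughly the same wall-clock time, but the async one releases the calling thread (e.g. the UI thread) while waiting. The URL is just a placeholder:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncUxDemo
{
    static readonly HttpClient Client = new HttpClient();

    // Blocks the calling thread until the download completes.
    static string DownloadBlocking(string url) =>
        Client.GetStringAsync(url).GetAwaiter().GetResult();

    // Takes about the same total time, but the thread is released
    // back to the caller while the request is in flight.
    static async Task<string> DownloadAsync(string url) =>
        await Client.GetStringAsync(url);

    static async Task Main()
    {
        string html = await DownloadAsync("https://example.com"); // placeholder URL
        Console.WriteLine(html.Length);
    }
}
```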
- Ignore Telemetry
- Real-world performance data (especially in SaaS scenarios)
- Plan for it
- Put it on your backlog
- Get (time) budget for it (time-boxing); consider business case for your optimization project
- Follow a design-to-cost approach
- Make yourself familiar with corresponding tools
- Prepare a defined, reproducible test scenario
- Hardware, software, network
- Test data (e.g. database)
- Application scenarios (automate if possible)
- Measure performance baseline
- E.g. CPU%, memory footprint, throughput, response time
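A minimal sketch of capturing such a baseline in code; RunScenario is a placeholder for your automated scenario, and for serious measurements a profiler or BenchmarkDotNet is preferable:

```csharp
using System;
using System.Diagnostics;

class BaselineDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        RunScenario();
        sw.Stop();

        var process = Process.GetCurrentProcess();
        process.Refresh(); // update the cached perf counters

        Console.WriteLine($"Elapsed:        {sw.ElapsedMilliseconds} ms");
        Console.WriteLine($"Managed memory: {GC.GetTotalMemory(false)} bytes");
        Console.WriteLine($"Working set:    {process.WorkingSet64} bytes");
    }

    static void RunScenario()
    {
        // Placeholder for the reproducible, automated application scenario.
        for (int i = 0; i < 1_000_000; i++) _ = Math.Sqrt(i);
    }
}
```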
- Define performance goals
- Must be measurable
- Involve stakeholders (e.g. product owners, customers, partners, etc.)
- Optimize - Measure - Analyze Cycle
- Don't change too many things at the same time
- Measure after optimizing
- Compare against baseline; if necessary, reset your baseline
- Check if you have reached performance goals/time-box
- Ask for feedback in real-world environments
- E.g. friendly customers, testing team
- Telemetry (e.g. Application Insights)
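A hedged sketch of reporting a custom perf metric, assuming the Microsoft.ApplicationInsights NuGet package; the connection string and metric name are placeholders:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class TelemetryDemo
{
    static void Main()
    {
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = "<your-connection-string>"; // placeholder
        var client = new TelemetryClient(config);

        // Report a custom performance metric from the real-world environment.
        client.GetMetric("OrderProcessingMs").TrackValue(42.0);

        client.Flush(); // make sure buffered telemetry is sent before exit
    }
}
```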
- Document and present your work
- Architecture, code, measurement results
- Potentially change your system requirements, guidelines for admins, etc.
- Share best/worst practices with your peers
- Ship your results
- Remember: Ship release builds
- Continuous deployment/short release cycles let customers benefit from perf optimizations
- Consider hotfixes
- Easy to build different execution environments
- Number of processors, RAM, different operating systems, etc.
- Performance of database clusters
- Don't wait for admins to set up/deliver test machines/VMs
- Design for scale-out and micro-services
- Easier to add/remove VMs/containers than scaling up/down
- Use micro-services and map them to server farms with e.g. Azure Websites or Docker
- Extremely cost efficient
- You only pay for the time your perf tests last
- You can use your partner benefits, BizSpark benefits, etc.
- Fewer data security issues if you use artificial test data
- Ability to run large-scale load tests
- Gather perf data during long-running, large-scale load tests
- SaaS enables you to optimize for a concrete environment
- Performance of storage system
- Database, file system, etc.
- Performance of services used
- E.g. external web services
- Network characteristics
- How chatty is your application?
- Latency, throughput, bandwidth
- Especially important in multi-tier applications
- Efficiency of your algorithms
- Core algorithms
- Parallel vs. sequential
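A minimal sketch of the parallel-vs-sequential trade-off using PLINQ; parallelizing only pays off when the per-item work outweighs the coordination overhead:

```csharp
using System;
using System.Linq;

class ParallelDemo
{
    static void Main()
    {
        int[] data = Enumerable.Range(1, 10_000_000).ToArray();

        // Sequential: one core, no coordination overhead.
        long seqSum = 0;
        foreach (int x in data) seqSum += x;

        // PLINQ distributes the work across all available cores.
        long parSum = data.AsParallel().Sum(x => (long)x);

        Console.WriteLine($"{seqSum} == {parSum}");
    }
}
```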
- Platform characteristics
- JIT compiler
- Garbage collector
- Hardware
- Number of cores, 64 vs. 32 bits, RAM, SSDs, etc.
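A minimal sketch of querying some of these platform characteristics from code:

```csharp
using System;

class PlatformInfo
{
    static void Main()
    {
        Console.WriteLine($"Cores:          {Environment.ProcessorCount}");
        Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
        Console.WriteLine($"OS:             {Environment.OSVersion}");
    }
}
```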
- Network connection to the database
- Latency, throughput
- Do you really need all the data you read from the database (e.g. unnecessary columns)?
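A hedged sketch with Entity Framework Core (the Customer entity, AppDbContext, and connection string are hypothetical; it assumes the Microsoft.EntityFrameworkCore.SqlServer package); projecting to just the needed columns keeps unnecessary data off the wire:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public string Address { get; set; } = "";
    public byte[]? Photo { get; set; }   // a column we often don't need
}

public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();
    protected override void OnConfiguring(DbContextOptionsBuilder o) =>
        o.UseSqlServer("<connection-string>"); // placeholder
}

static class ProjectionDemo
{
    public static void PrintNames()
    {
        using var db = new AppDbContext();

        // Reads every column, including the potentially large Photo blob.
        var full = db.Customers.ToList();

        // Projection: only Id and Name are selected and transferred.
        var slim = db.Customers
                     .Select(c => new { c.Id, c.Name })
                     .ToList();
    }
}
```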
- Generation of execution plan
- Statement parsing, compilation of execution plan
- Bound to the CPU power of the database server
- Can you simplify your query to reduce parse and compile time?
- Query execution
- Complexity of query, index optimization, etc.
- You might need a database expert/admin to tune your SQL statements
- Process DB results
- Turn DB results into .NET objects (O/R mappers)
- DB access characteristics
- Many small vs. few large statements
- Lazy loading
- DB latency influences DB access strategy
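A hedged sketch of "many small vs. few large statements" with Entity Framework Core; Order, OrderLine, and OrdersContext are hypothetical. With lazy loading enabled, the loop issues one extra query per order (the classic N+1 problem):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public virtual List<OrderLine> Lines { get; set; } = new();
}

public class OrderLine
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
    protected override void OnConfiguring(DbContextOptionsBuilder o) =>
        o.UseSqlServer("<connection-string>"); // placeholder
}

static class LoadingDemo
{
    public static void Compare(OrdersContext db)
    {
        // N+1: one query for the orders, then one more per order
        // for its lines when lazy loading is enabled.
        foreach (var order in db.Orders.ToList())
        {
            var lineCount = order.Lines.Count; // triggers a lazy load
        }

        // One round trip: orders and their lines in a single statement.
        var eager = db.Orders.Include(o => o.Lines).ToList();
    }
}
```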
- How often do you call over the network?
- Latency, speed-of-light problem
- Ratio between latency and service operation
- Consider reducing network calls with caching (e.g. Redis cache)...
...but make sure that your cache doesn't make perf worse!
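A hedged sketch of such a cache with the StackExchange.Redis client; the connection string, key format, and LoadFromDatabase are placeholders. Measure it: serialization and the extra network hop have a cost of their own:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class CacheDemo
{
    // Reuse one multiplexer for the whole process.
    static readonly Lazy<ConnectionMultiplexer> Redis =
        new(() => ConnectionMultiplexer.Connect("localhost:6379")); // placeholder

    static async Task<string> GetCustomerJsonAsync(int id)
    {
        IDatabase cache = Redis.Value.GetDatabase();
        string key = $"customer:{id}";

        RedisValue cached = await cache.StringGetAsync(key);
        if (cached.HasValue)
            return cached!;                   // cache hit: no database round trip

        string json = LoadFromDatabase(id);   // placeholder for the real query
        await cache.StringSetAsync(key, json, TimeSpan.FromMinutes(5));
        return json;
    }

    static string LoadFromDatabase(int id) => $"{{\"id\":{id}}}";

    static async Task Main() =>
        Console.WriteLine(await GetCustomerJsonAsync(42));
}
```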
- How much data do you transfer?
- Transfer less data (e.g. unnecessary database columns)
- Make protocol more efficient (e.g. specific REST services or OData instead of generic services)
- Measuring is important
- The tools you use might do things you are not aware of (e.g. O/R mapper)
Tools
- Telerik Fiddler
- Web debugging proxy
- Wireshark
- Network packet analyzer
JIT Compilation
- The PreJITStub is responsible for triggering JIT compilation of a method on its first call
- After compilation, the stub is overwritten with a jump to the JIT-compiled code
- Image source: https://msdn.microsoft.com/en-us/magazine/cc163791.aspx
Tools
- Windows Performance Monitor (PerfMon)
- Gather telemetry of local system
- PerfView
- Free, low-level profiler for Windows
NGEN (Native Image Generator)
- Generates native images for an assembly and its dependencies
- Installed native images are reference-counted (relevant for updates/uninstall)
- Advantages
- Better startup time (no JITing, faster assembly loading)
- Smaller memory footprint (code sharing between processes, important in RDS scenarios)
- Disadvantages
- NGEN has to be called (also for updates) - requires installer (incl. admin privileges)
- NGEN takes time (longer install time)
- NGEN images are larger on disk
- Native code slightly less performant than JIT'ed code
- The CLR is a stack-based runtime
- The stack holds value types (and references to heap objects)
- Managed heap
- Managed by the CLR
- Allocating memory is usually very fast
- When necessary (e.g. thresholds, memory pressure, etc.), unreferenced memory is freed
- Generations of objects
- Gen 0, 1, and 2
- Large objects (> 85,000 bytes) are handled differently (large object heap)
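A minimal sketch showing the generation an object lands in; the ~85,000-byte threshold sends the large array straight to the LOH, which the runtime reports as Gen 2:

```csharp
using System;

class GenDemo
{
    static void Main()
    {
        var small = new byte[1_000];
        var large = new byte[100_000];              // allocated on the LOH

        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // LOH reports as Gen 2
    }
}
```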
- Different GC strategies
- Workstation (background) garbage collection
- Server garbage collection (optimized for throughput)
- Choose via config setting
- Concurrent collection for Gen 2 collections
- You can allocate small objects during Gen 2 collection
- Background GC
- For workstation in .NET >= 4, for server in .NET >= 4.5
- For details see MSDN
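A minimal sketch for checking at runtime which GC mode the process got via configuration (e.g. <gcServer enabled="true"/> in app.config):

```csharp
using System;
using System.Runtime;

class GcModeDemo
{
    static void Main()
    {
        Console.WriteLine($"Server GC:    {GCSettings.IsServerGC}");
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
    }
}
```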
- Avoid allocating unnecessary memory
- Unnecessary allocations raise GC pressure
- Consider weak references for large objects
- Reuse large objects
- Use memory perf counters for analysis
- See MSDN for details
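A minimal sketch of holding a large, recreatable object via WeakReference&lt;T&gt; so the GC may reclaim it under memory pressure:

```csharp
using System;

class WeakCacheDemo
{
    static WeakReference<byte[]>? _cache;

    static byte[] GetBuffer()
    {
        if (_cache != null && _cache.TryGetTarget(out var buffer))
            return buffer;                  // still alive: reuse it

        buffer = new byte[1_000_000];       // expensive to recreate
        _cache = new WeakReference<byte[]>(buffer);
        return buffer;
    }

    static void Main() => Console.WriteLine(GetBuffer().Length);
}
```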
- Be careful when inducing GC with GC.Collect
- Add GC.Collect only if you are sure that it makes sense
- Hunt memory leaks and remove them
- Suppress GC during perf critical operations
- Use GC latency modes for that
- Use this feature with care
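A hedged sketch of suppressing GC around a critical section with GC.TryStartNoGCRegion (available from .NET Framework 4.6 / .NET Core); the 16 MB budget is an arbitrary example value:

```csharp
using System;
using System.Runtime;

class NoGcDemo
{
    static void Main()
    {
        // Allocations inside the region must stay below the budget,
        // otherwise the GC ends the region itself.
        if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
        {
            try
            {
                // Perf-critical work goes here.
            }
            finally
            {
                // Only end the region if it is still active.
                if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                    GC.EndNoGCRegion();
            }
        }
    }
}
```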
- Prepare your optimization projects appropriately
- Write obvious code first
- Measure to find the right places to optimize
- Use profilers
- Make small steps and gather feedback
- Use the cloud