Feb 12 21:58:53.168385 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 21:58:53.168418 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:58:53.168434 kernel: BIOS-provided physical RAM map:
Feb 12 21:58:53.168445 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 21:58:53.168456 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 21:58:53.168467 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 21:58:53.168484 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 12 21:58:53.168496 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 12 21:58:53.168508 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 12 21:58:53.168519 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 21:58:53.168531 kernel: NX (Execute Disable) protection: active
Feb 12 21:58:53.168542 kernel: SMBIOS 2.7 present.
Feb 12 21:58:53.168554 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 12 21:58:53.168566 kernel: Hypervisor detected: KVM
Feb 12 21:58:53.168584 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 21:58:53.168597 kernel: kvm-clock: cpu 0, msr 60faa001, primary cpu clock
Feb 12 21:58:53.168609 kernel: kvm-clock: using sched offset of 6676876783 cycles
Feb 12 21:58:53.168623 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 21:58:53.168659 kernel: tsc: Detected 2500.006 MHz processor
Feb 12 21:58:53.168673 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 21:58:53.168689 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 21:58:53.168701 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 12 21:58:53.168714 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 12 21:58:53.168727 kernel: Using GB pages for direct mapping
Feb 12 21:58:53.168896 kernel: ACPI: Early table checksum verification disabled
Feb 12 21:58:53.168910 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 12 21:58:53.168923 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 12 21:58:53.168936 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 21:58:53.168949 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 12 21:58:53.168965 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 12 21:58:53.168978 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:58:53.168991 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 21:58:53.169003 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 12 21:58:53.169016 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 21:58:53.169029 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 12 21:58:53.169042 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 12 21:58:53.169055 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:58:53.169071 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 12 21:58:53.169084 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 12 21:58:53.169097 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 12 21:58:53.169116 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 12 21:58:53.169129 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 12 21:58:53.169208 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 12 21:58:53.169223 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 12 21:58:53.169300 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 12 21:58:53.169316 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 12 21:58:53.169330 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 12 21:58:53.169345 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 21:58:53.169358 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 21:58:53.169372 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 12 21:58:53.169428 kernel: NUMA: Initialized distance table, cnt=1
Feb 12 21:58:53.169445 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 12 21:58:53.169463 kernel: Zone ranges:
Feb 12 21:58:53.169477 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 21:58:53.169492 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 12 21:58:53.169506 kernel:   Normal   empty
Feb 12 21:58:53.169519 kernel: Movable zone start for each node
Feb 12 21:58:53.169534 kernel: Early memory node ranges
Feb 12 21:58:53.169547 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 21:58:53.169562 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 12 21:58:53.169576 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 12 21:58:53.169593 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 21:58:53.169607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 21:58:53.169621 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 12 21:58:53.169634 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 21:58:53.169648 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 21:58:53.169663 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 12 21:58:53.169677 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 21:58:53.169691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 21:58:53.169705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 21:58:53.169721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 21:58:53.169735 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 21:58:53.169749 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 21:58:53.169763 kernel: TSC deadline timer available
Feb 12 21:58:53.169776 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 21:58:53.169790 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 12 21:58:53.169803 kernel: Booting paravirtualized kernel on KVM
Feb 12 21:58:53.169817 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 21:58:53.169832 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 21:58:53.169848 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 21:58:53.169862 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 21:58:53.175924 kernel: pcpu-alloc: [0] 0 1 
Feb 12 21:58:53.175952 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Feb 12 21:58:53.175967 kernel: kvm-guest: PV spinlocks enabled
Feb 12 21:58:53.175981 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 21:58:53.175995 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Feb 12 21:58:53.176009 kernel: Policy zone: DMA32
Feb 12 21:58:53.176025 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:58:53.176047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 21:58:53.176060 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 21:58:53.176074 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 21:58:53.176088 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 21:58:53.176102 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved)
Feb 12 21:58:53.176116 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 21:58:53.176129 kernel: Kernel/User page tables isolation: enabled
Feb 12 21:58:53.176143 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 21:58:53.176159 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 21:58:53.176173 kernel: rcu: Hierarchical RCU implementation.
Feb 12 21:58:53.176187 kernel: rcu:         RCU event tracing is enabled.
Feb 12 21:58:53.176201 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 21:58:53.176215 kernel:         Rude variant of Tasks RCU enabled.
Feb 12 21:58:53.176229 kernel:         Tracing variant of Tasks RCU enabled.
Feb 12 21:58:53.176242 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 21:58:53.176256 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 21:58:53.176270 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 21:58:53.176286 kernel: random: crng init done
Feb 12 21:58:53.176299 kernel: Console: colour VGA+ 80x25
Feb 12 21:58:53.176313 kernel: printk: console [ttyS0] enabled
Feb 12 21:58:53.176326 kernel: ACPI: Core revision 20210730
Feb 12 21:58:53.176340 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 12 21:58:53.176353 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 21:58:53.176367 kernel: x2apic enabled
Feb 12 21:58:53.176380 kernel: Switched APIC routing to physical x2apic.
Feb 12 21:58:53.176394 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Feb 12 21:58:53.176410 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006)
Feb 12 21:58:53.176424 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 21:58:53.176438 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 21:58:53.176452 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 21:58:53.176475 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 21:58:53.176491 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 21:58:53.176505 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 21:58:53.176520 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 21:58:53.176534 kernel: RETBleed: Vulnerable
Feb 12 21:58:53.176548 kernel: Speculative Store Bypass: Vulnerable
Feb 12 21:58:53.176562 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:58:53.176576 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:58:53.176590 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 21:58:53.176604 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 21:58:53.176621 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 21:58:53.176643 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 21:58:53.176657 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 12 21:58:53.176671 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 12 21:58:53.176744 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 21:58:53.176766 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 21:58:53.176782 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 21:58:53.176796 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 12 21:58:53.176810 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 12 21:58:53.176825 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Feb 12 21:58:53.176839 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Feb 12 21:58:53.176853 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Feb 12 21:58:53.176867 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Feb 12 21:58:53.180928 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 12 21:58:53.180943 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Feb 12 21:58:53.180957 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 12 21:58:53.180971 kernel: Freeing SMP alternatives memory: 32K
Feb 12 21:58:53.180990 kernel: pid_max: default: 32768 minimum: 301
Feb 12 21:58:53.181004 kernel: LSM: Security Framework initializing
Feb 12 21:58:53.181017 kernel: SELinux:  Initializing.
Feb 12 21:58:53.181031 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:58:53.181045 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:58:53.181058 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 21:58:53.181072 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 21:58:53.181086 kernel: signal: max sigframe size: 3632
Feb 12 21:58:53.181100 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 21:58:53.181113 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 21:58:53.181130 kernel: smp: Bringing up secondary CPUs ...
Feb 12 21:58:53.181144 kernel: x86: Booting SMP configuration:
Feb 12 21:58:53.181157 kernel: .... node  #0, CPUs:      #1
Feb 12 21:58:53.181171 kernel: kvm-clock: cpu 1, msr 60faa041, secondary cpu clock
Feb 12 21:58:53.181185 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Feb 12 21:58:53.181199 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 12 21:58:53.181215 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 21:58:53.181228 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 21:58:53.181241 kernel: smpboot: Max logical packages: 1
Feb 12 21:58:53.181258 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS)
Feb 12 21:58:53.181271 kernel: devtmpfs: initialized
Feb 12 21:58:53.181284 kernel: x86/mm: Memory block size: 128MB
Feb 12 21:58:53.181298 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 21:58:53.181312 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 21:58:53.181325 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 21:58:53.181339 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 21:58:53.181353 kernel: audit: initializing netlink subsys (disabled)
Feb 12 21:58:53.181366 kernel: audit: type=2000 audit(1707775131.883:1): state=initialized audit_enabled=0 res=1
Feb 12 21:58:53.181382 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 21:58:53.181394 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 21:58:53.181408 kernel: cpuidle: using governor menu
Feb 12 21:58:53.181423 kernel: ACPI: bus type PCI registered
Feb 12 21:58:53.181437 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 21:58:53.181450 kernel: dca service started, version 1.12.1
Feb 12 21:58:53.181465 kernel: PCI: Using configuration type 1 for base access
Feb 12 21:58:53.181482 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 21:58:53.181498 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 21:58:53.181517 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 21:58:53.181532 kernel: ACPI: Added _OSI(Module Device)
Feb 12 21:58:53.181546 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 21:58:53.181559 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 21:58:53.181572 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 21:58:53.181585 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 21:58:53.181598 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 21:58:53.181611 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 21:58:53.181624 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 12 21:58:53.181640 kernel: ACPI: Interpreter enabled
Feb 12 21:58:53.181653 kernel: ACPI: PM: (supports S0 S5)
Feb 12 21:58:53.181667 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 21:58:53.181680 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 21:58:53.181695 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 12 21:58:53.181709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 21:58:53.181931 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 21:58:53.182063 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 21:58:53.182084 kernel: acpiphp: Slot [3] registered
Feb 12 21:58:53.182100 kernel: acpiphp: Slot [4] registered
Feb 12 21:58:53.182114 kernel: acpiphp: Slot [5] registered
Feb 12 21:58:53.182129 kernel: acpiphp: Slot [6] registered
Feb 12 21:58:53.182143 kernel: acpiphp: Slot [7] registered
Feb 12 21:58:53.182158 kernel: acpiphp: Slot [8] registered
Feb 12 21:58:53.182172 kernel: acpiphp: Slot [9] registered
Feb 12 21:58:53.182187 kernel: acpiphp: Slot [10] registered
Feb 12 21:58:53.182201 kernel: acpiphp: Slot [11] registered
Feb 12 21:58:53.182218 kernel: acpiphp: Slot [12] registered
Feb 12 21:58:53.182233 kernel: acpiphp: Slot [13] registered
Feb 12 21:58:53.182247 kernel: acpiphp: Slot [14] registered
Feb 12 21:58:53.182262 kernel: acpiphp: Slot [15] registered
Feb 12 21:58:53.182277 kernel: acpiphp: Slot [16] registered
Feb 12 21:58:53.182291 kernel: acpiphp: Slot [17] registered
Feb 12 21:58:53.182306 kernel: acpiphp: Slot [18] registered
Feb 12 21:58:53.182320 kernel: acpiphp: Slot [19] registered
Feb 12 21:58:53.182334 kernel: acpiphp: Slot [20] registered
Feb 12 21:58:53.182351 kernel: acpiphp: Slot [21] registered
Feb 12 21:58:53.182365 kernel: acpiphp: Slot [22] registered
Feb 12 21:58:53.182380 kernel: acpiphp: Slot [23] registered
Feb 12 21:58:53.182394 kernel: acpiphp: Slot [24] registered
Feb 12 21:58:53.182409 kernel: acpiphp: Slot [25] registered
Feb 12 21:58:53.182423 kernel: acpiphp: Slot [26] registered
Feb 12 21:58:53.182437 kernel: acpiphp: Slot [27] registered
Feb 12 21:58:53.182452 kernel: acpiphp: Slot [28] registered
Feb 12 21:58:53.182466 kernel: acpiphp: Slot [29] registered
Feb 12 21:58:53.182480 kernel: acpiphp: Slot [30] registered
Feb 12 21:58:53.182496 kernel: acpiphp: Slot [31] registered
Feb 12 21:58:53.182588 kernel: PCI host bridge to bus 0000:00
Feb 12 21:58:53.182728 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 12 21:58:53.182839 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 12 21:58:53.182958 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 21:58:53.183064 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 21:58:53.183170 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 21:58:53.183309 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 21:58:53.183438 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 21:58:53.183710 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 12 21:58:53.183841 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 21:58:53.183973 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Feb 12 21:58:53.184092 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 12 21:58:53.184210 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 12 21:58:53.184397 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 12 21:58:53.184525 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 12 21:58:53.184722 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 12 21:58:53.193971 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 12 21:58:53.194173 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 11718 usecs
Feb 12 21:58:53.194321 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 12 21:58:53.194450 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 12 21:58:53.194664 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 12 21:58:53.194795 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 21:58:53.197153 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 21:58:53.197297 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 12 21:58:53.197476 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 21:58:53.197605 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 12 21:58:53.197632 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 21:58:53.197648 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 21:58:53.197663 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 21:58:53.197678 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 21:58:53.197693 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 21:58:53.197707 kernel: iommu: Default domain type: Translated 
Feb 12 21:58:53.197722 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 12 21:58:53.197852 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 12 21:58:53.198080 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 21:58:53.198271 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 12 21:58:53.198292 kernel: vgaarb: loaded
Feb 12 21:58:53.198308 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 21:58:53.198323 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 21:58:53.198338 kernel: PTP clock support registered
Feb 12 21:58:53.198353 kernel: PCI: Using ACPI for IRQ routing
Feb 12 21:58:53.198368 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 21:58:53.198383 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 21:58:53.198400 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 12 21:58:53.198415 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 12 21:58:53.198431 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 12 21:58:53.198445 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 21:58:53.198460 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 21:58:53.198476 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 21:58:53.198491 kernel: pnp: PnP ACPI init
Feb 12 21:58:53.198505 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 21:58:53.198520 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 21:58:53.198538 kernel: NET: Registered PF_INET protocol family
Feb 12 21:58:53.198553 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 21:58:53.198568 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 21:58:53.198583 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 21:58:53.198597 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 21:58:53.198613 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 21:58:53.198628 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 21:58:53.198643 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:58:53.198658 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:58:53.198676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 21:58:53.198691 kernel: NET: Registered PF_XDP protocol family
Feb 12 21:58:53.198814 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 12 21:58:53.198942 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 12 21:58:53.199055 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 21:58:53.199202 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 21:58:53.199336 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 21:58:53.199466 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 21:58:53.199490 kernel: PCI: CLS 0 bytes, default 64
Feb 12 21:58:53.199505 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 21:58:53.199521 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Feb 12 21:58:53.199537 kernel: clocksource: Switched to clocksource tsc
Feb 12 21:58:53.199551 kernel: Initialise system trusted keyrings
Feb 12 21:58:53.199566 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 21:58:53.199581 kernel: Key type asymmetric registered
Feb 12 21:58:53.199596 kernel: Asymmetric key parser 'x509' registered
Feb 12 21:58:53.199614 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 21:58:53.199628 kernel: io scheduler mq-deadline registered
Feb 12 21:58:53.199643 kernel: io scheduler kyber registered
Feb 12 21:58:53.199657 kernel: io scheduler bfq registered
Feb 12 21:58:53.199672 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 21:58:53.199687 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 21:58:53.199702 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 21:58:53.199716 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 21:58:53.199805 kernel: i8042: Warning: Keylock active
Feb 12 21:58:53.199822 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 21:58:53.199837 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 21:58:53.207170 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 12 21:58:53.207457 kernel: rtc_cmos 00:00: registered as rtc0
Feb 12 21:58:53.207585 kernel: rtc_cmos 00:00: setting system clock to 2024-02-12T21:58:52 UTC (1707775132)
Feb 12 21:58:53.207702 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 12 21:58:53.207722 kernel: intel_pstate: CPU model not supported
Feb 12 21:58:53.207738 kernel: NET: Registered PF_INET6 protocol family
Feb 12 21:58:53.207761 kernel: Segment Routing with IPv6
Feb 12 21:58:53.207776 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 21:58:53.207791 kernel: NET: Registered PF_PACKET protocol family
Feb 12 21:58:53.207806 kernel: Key type dns_resolver registered
Feb 12 21:58:53.207821 kernel: IPI shorthand broadcast: enabled
Feb 12 21:58:53.207837 kernel: sched_clock: Marking stable (445931977, 328427598)->(894089829, -119730254)
Feb 12 21:58:53.207853 kernel: registered taskstats version 1
Feb 12 21:58:53.207868 kernel: Loading compiled-in X.509 certificates
Feb 12 21:58:53.207912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 21:58:53.207930 kernel: Key type .fscrypt registered
Feb 12 21:58:53.207945 kernel: Key type fscrypt-provisioning registered
Feb 12 21:58:53.207961 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 21:58:53.207976 kernel: ima: Allocated hash algorithm: sha1
Feb 12 21:58:53.207991 kernel: ima: No architecture policies found
Feb 12 21:58:53.208007 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 21:58:53.208022 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 21:58:53.208037 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 21:58:53.208052 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 21:58:53.208069 kernel: Run /init as init process
Feb 12 21:58:53.208084 kernel:   with arguments:
Feb 12 21:58:53.208099 kernel:     /init
Feb 12 21:58:53.208114 kernel:   with environment:
Feb 12 21:58:53.208129 kernel:     HOME=/
Feb 12 21:58:53.208144 kernel:     TERM=linux
Feb 12 21:58:53.208158 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 21:58:53.208177 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:58:53.208199 systemd[1]: Detected virtualization amazon.
Feb 12 21:58:53.208216 systemd[1]: Detected architecture x86-64.
Feb 12 21:58:53.208232 systemd[1]: Running in initrd.
Feb 12 21:58:53.208248 systemd[1]: No hostname configured, using default hostname.
Feb 12 21:58:53.208623 systemd[1]: Hostname set to <localhost>.
Feb 12 21:58:53.208657 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 21:58:53.208673 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 21:58:53.208690 systemd[1]: Queued start job for default target initrd.target.
Feb 12 21:58:53.208706 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:58:53.208722 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:58:53.208738 systemd[1]: Reached target paths.target.
Feb 12 21:58:53.208753 systemd[1]: Reached target slices.target.
Feb 12 21:58:53.208769 systemd[1]: Reached target swap.target.
Feb 12 21:58:53.208786 systemd[1]: Reached target timers.target.
Feb 12 21:58:53.208805 systemd[1]: Listening on iscsid.socket.
Feb 12 21:58:53.208821 systemd[1]: Listening on iscsiuio.socket.
Feb 12 21:58:53.208841 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 21:58:53.208857 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 21:58:53.208884 systemd[1]: Listening on systemd-journald.socket.
Feb 12 21:58:53.208901 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:58:53.208917 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:58:53.208930 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:58:53.208946 systemd[1]: Reached target sockets.target.
Feb 12 21:58:53.208959 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:58:53.208976 systemd[1]: Finished network-cleanup.service.
Feb 12 21:58:53.208994 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 21:58:53.209013 systemd[1]: Starting systemd-journald.service...
Feb 12 21:58:53.209030 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:58:53.209048 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:58:53.209066 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 21:58:53.209082 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 21:58:53.209100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 21:58:53.209117 kernel: Bridge firewalling registered
Feb 12 21:58:53.209138 systemd-journald[185]: Journal started
Feb 12 21:58:53.209217 systemd-journald[185]: Runtime Journal (/run/log/journal/ec293d392ada477961c42c9aa1975e2f) is 4.8M, max 38.7M, 33.9M free.
Feb 12 21:58:53.149222 systemd-modules-load[186]: Inserted module 'overlay'
Feb 12 21:58:53.370227 kernel: SCSI subsystem initialized
Feb 12 21:58:53.370260 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 21:58:53.370282 kernel: device-mapper: uevent: version 1.0.3
Feb 12 21:58:53.370302 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 21:58:53.370320 systemd[1]: Started systemd-journald.service.
Feb 12 21:58:53.370343 kernel: audit: type=1130 audit(1707775133.362:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.192138 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 12 21:58:53.242855 systemd-modules-load[186]: Inserted module 'dm_multipath'
Feb 12 21:58:53.245978 systemd-resolved[187]: Positive Trust Anchors:
Feb 12 21:58:53.387297 kernel: audit: type=1130 audit(1707775133.372:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.387366 kernel: audit: type=1130 audit(1707775133.374:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.245989 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 21:58:53.400183 kernel: audit: type=1130 audit(1707775133.386:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.400217 kernel: audit: type=1130 audit(1707775133.387:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.246037 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 21:58:53.250559 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 12 21:58:53.428589 kernel: audit: type=1130 audit(1707775133.421:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.374890 systemd[1]: Started systemd-resolved.service.
Feb 12 21:58:53.376373 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 21:58:53.387508 systemd[1]: Finished systemd-modules-load.service.
Feb 12 21:58:53.388760 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 21:58:53.428807 systemd[1]: Reached target nss-lookup.target.
Feb 12 21:58:53.430865 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 21:58:53.432963 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:58:53.434218 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 21:58:53.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.451283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 21:58:53.458362 kernel: audit: type=1130 audit(1707775133.451:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.467904 kernel: audit: type=1130 audit(1707775133.460:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.462587 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:58:53.472004 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 21:58:53.473185 systemd[1]: Starting dracut-cmdline.service...
Feb 12 21:58:53.481056 kernel: audit: type=1130 audit(1707775133.470:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.486439 dracut-cmdline[206]: dracut-dracut-053
Feb 12 21:58:53.489540 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:58:53.576895 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 21:58:53.589890 kernel: iscsi: registered transport (tcp)
Feb 12 21:58:53.616906 kernel: iscsi: registered transport (qla4xxx)
Feb 12 21:58:53.616980 kernel: QLogic iSCSI HBA Driver
Feb 12 21:58:53.659525 systemd[1]: Finished dracut-cmdline.service.
Feb 12 21:58:53.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:53.661208 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 21:58:53.727090 kernel: raid6: avx512x4 gen() 11605 MB/s
Feb 12 21:58:53.745035 kernel: raid6: avx512x4 xor()  6314 MB/s
Feb 12 21:58:53.762923 kernel: raid6: avx512x2 gen() 12555 MB/s
Feb 12 21:58:53.780929 kernel: raid6: avx512x2 xor() 19485 MB/s
Feb 12 21:58:53.799556 kernel: raid6: avx512x1 gen() 11365 MB/s
Feb 12 21:58:53.815928 kernel: raid6: avx512x1 xor() 20459 MB/s
Feb 12 21:58:53.832925 kernel: raid6: avx2x4   gen() 16313 MB/s
Feb 12 21:58:53.850913 kernel: raid6: avx2x4   xor()  6603 MB/s
Feb 12 21:58:53.867923 kernel: raid6: avx2x2   gen() 15979 MB/s
Feb 12 21:58:53.885914 kernel: raid6: avx2x2   xor() 16869 MB/s
Feb 12 21:58:53.902927 kernel: raid6: avx2x1   gen() 12281 MB/s
Feb 12 21:58:53.920036 kernel: raid6: avx2x1   xor() 14615 MB/s
Feb 12 21:58:53.938016 kernel: raid6: sse2x4   gen()  8958 MB/s
Feb 12 21:58:53.956931 kernel: raid6: sse2x4   xor()  3750 MB/s
Feb 12 21:58:53.973920 kernel: raid6: sse2x2   gen()  7138 MB/s
Feb 12 21:58:53.990917 kernel: raid6: sse2x2   xor()  5670 MB/s
Feb 12 21:58:54.008926 kernel: raid6: sse2x1   gen()  7731 MB/s
Feb 12 21:58:54.027219 kernel: raid6: sse2x1   xor()  3886 MB/s
Feb 12 21:58:54.027339 kernel: raid6: using algorithm avx2x4 gen() 16313 MB/s
Feb 12 21:58:54.027660 kernel: raid6: .... xor() 6603 MB/s, rmw enabled
Feb 12 21:58:54.028857 kernel: raid6: using avx512x2 recovery algorithm
Feb 12 21:58:54.043895 kernel: xor: automatically using best checksumming function   avx       
Feb 12 21:58:54.179863 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 21:58:54.190414 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 21:58:54.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:54.190000 audit: BPF prog-id=7 op=LOAD
Feb 12 21:58:54.190000 audit: BPF prog-id=8 op=LOAD
Feb 12 21:58:54.192974 systemd[1]: Starting systemd-udevd.service...
Feb 12 21:58:54.213041 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Feb 12 21:58:54.226313 systemd[1]: Started systemd-udevd.service.
Feb 12 21:58:54.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:54.230274 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 21:58:54.250896 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation
Feb 12 21:58:54.295036 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 21:58:54.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:54.299427 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 21:58:54.347607 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 21:58:54.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:54.414917 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 21:58:54.432891 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 21:58:54.454102 kernel: AES CTR mode by8 optimization enabled
Feb 12 21:58:54.461993 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 12 21:58:54.462269 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 12 21:58:54.466809 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 12 21:58:54.469895 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a5:30:ee:51:43
Feb 12 21:58:54.471679 (udev-worker)[440]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:58:54.688395 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 12 21:58:54.688725 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 21:58:54.688755 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 12 21:58:54.688918 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 21:58:54.688935 kernel: GPT:9289727 != 16777215
Feb 12 21:58:54.688950 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 21:58:54.688965 kernel: GPT:9289727 != 16777215
Feb 12 21:58:54.688979 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 21:58:54.688995 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:58:54.689013 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (431)
Feb 12 21:58:54.631790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 21:58:54.697203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 21:58:54.706523 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 21:58:54.719392 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 21:58:54.722908 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 21:58:54.726076 systemd[1]: Starting disk-uuid.service...
Feb 12 21:58:54.733205 disk-uuid[590]: Primary Header is updated.
Feb 12 21:58:54.733205 disk-uuid[590]: Secondary Entries is updated.
Feb 12 21:58:54.733205 disk-uuid[590]: Secondary Header is updated.
Feb 12 21:58:54.740891 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:58:54.745159 kernel: GPT:disk_guids don't match.
Feb 12 21:58:54.745212 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 21:58:54.745237 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:58:54.753897 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:58:55.751898 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:58:55.752016 disk-uuid[591]: The operation has completed successfully.
Feb 12 21:58:55.890977 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 21:58:55.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:55.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:55.891157 systemd[1]: Finished disk-uuid.service.
Feb 12 21:58:55.898606 systemd[1]: Starting verity-setup.service...
Feb 12 21:58:55.931908 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 12 21:58:56.019177 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 21:58:56.020398 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 21:58:56.025779 systemd[1]: Finished verity-setup.service.
Feb 12 21:58:56.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.123939 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 21:58:56.123911 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 21:58:56.125560 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 21:58:56.128182 systemd[1]: Starting ignition-setup.service...
Feb 12 21:58:56.130569 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 21:58:56.169529 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:58:56.169599 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:58:56.169620 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:58:56.180896 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:58:56.195273 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 21:58:56.217206 systemd[1]: Finished ignition-setup.service.
Feb 12 21:58:56.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.219080 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 21:58:56.263486 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 21:58:56.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.264000 audit: BPF prog-id=9 op=LOAD
Feb 12 21:58:56.271640 systemd[1]: Starting systemd-networkd.service...
Feb 12 21:58:56.335065 systemd-networkd[1103]: lo: Link UP
Feb 12 21:58:56.335076 systemd-networkd[1103]: lo: Gained carrier
Feb 12 21:58:56.337309 systemd-networkd[1103]: Enumeration completed
Feb 12 21:58:56.337430 systemd[1]: Started systemd-networkd.service.
Feb 12 21:58:56.338709 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 21:58:56.342175 systemd[1]: Reached target network.target.
Feb 12 21:58:56.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.346068 systemd[1]: Starting iscsiuio.service...
Feb 12 21:58:56.349466 systemd-networkd[1103]: eth0: Link UP
Feb 12 21:58:56.350491 systemd-networkd[1103]: eth0: Gained carrier
Feb 12 21:58:56.353485 systemd[1]: Started iscsiuio.service.
Feb 12 21:58:56.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.356284 systemd[1]: Starting iscsid.service...
Feb 12 21:58:56.361154 iscsid[1108]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 21:58:56.361154 iscsid[1108]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 21:58:56.361154 iscsid[1108]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 21:58:56.361154 iscsid[1108]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 21:58:56.371080 iscsid[1108]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 21:58:56.371080 iscsid[1108]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 21:58:56.373818 systemd[1]: Started iscsid.service.
Feb 12 21:58:56.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.377103 systemd[1]: Starting dracut-initqueue.service...
Feb 12 21:58:56.381080 systemd-networkd[1103]: eth0: DHCPv4 address 172.31.17.60/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 21:58:56.391435 systemd[1]: Finished dracut-initqueue.service.
Feb 12 21:58:56.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.392758 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 21:58:56.393932 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 21:58:56.395021 systemd[1]: Reached target remote-fs.target.
Feb 12 21:58:56.397792 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 21:58:56.406793 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 21:58:56.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.821393 ignition[1081]: Ignition 2.14.0
Feb 12 21:58:56.821407 ignition[1081]: Stage: fetch-offline
Feb 12 21:58:56.821540 ignition[1081]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:58:56.821582 ignition[1081]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:58:56.836423 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:58:56.836932 ignition[1081]: Ignition finished successfully
Feb 12 21:58:56.839945 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 21:58:56.842222 systemd[1]: Starting ignition-fetch.service...
Feb 12 21:58:56.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.850608 ignition[1127]: Ignition 2.14.0
Feb 12 21:58:56.850617 ignition[1127]: Stage: fetch
Feb 12 21:58:56.850761 ignition[1127]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:58:56.850783 ignition[1127]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:58:56.857818 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:58:56.859233 ignition[1127]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:58:56.878458 ignition[1127]: INFO     : PUT result: OK
Feb 12 21:58:56.885045 ignition[1127]: DEBUG    : parsed url from cmdline: ""
Feb 12 21:58:56.885045 ignition[1127]: INFO     : no config URL provided
Feb 12 21:58:56.885045 ignition[1127]: INFO     : reading system config file "/usr/lib/ignition/user.ign"
Feb 12 21:58:56.896862 ignition[1127]: INFO     : no config at "/usr/lib/ignition/user.ign"
Feb 12 21:58:56.896862 ignition[1127]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:58:56.899623 ignition[1127]: INFO     : PUT result: OK
Feb 12 21:58:56.899623 ignition[1127]: INFO     : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 12 21:58:56.902341 ignition[1127]: INFO     : GET result: OK
Feb 12 21:58:56.903554 ignition[1127]: DEBUG    : parsing config with SHA512: 17fad5805c0c0d4cee3d2f9c0dfe04b0484178cdc380d838b7b125805fc6771240b7aee14800730f00243a9ae18c56d7089ebacf9813b348f6cbc658795a6a70
Feb 12 21:58:56.926242 unknown[1127]: fetched base config from "system"
Feb 12 21:58:56.926419 unknown[1127]: fetched base config from "system"
Feb 12 21:58:56.927551 unknown[1127]: fetched user config from "aws"
Feb 12 21:58:56.933478 ignition[1127]: fetch: fetch complete
Feb 12 21:58:56.933491 ignition[1127]: fetch: fetch passed
Feb 12 21:58:56.933549 ignition[1127]: Ignition finished successfully
Feb 12 21:58:56.935499 systemd[1]: Finished ignition-fetch.service.
Feb 12 21:58:56.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.938481 systemd[1]: Starting ignition-kargs.service...
Feb 12 21:58:56.948939 ignition[1133]: Ignition 2.14.0
Feb 12 21:58:56.948950 ignition[1133]: Stage: kargs
Feb 12 21:58:56.949155 ignition[1133]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:58:56.949943 ignition[1133]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:58:56.961454 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:58:56.962912 ignition[1133]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:58:56.966707 ignition[1133]: INFO     : PUT result: OK
Feb 12 21:58:56.968196 ignition[1133]: kargs: kargs passed
Feb 12 21:58:56.968255 ignition[1133]: Ignition finished successfully
Feb 12 21:58:56.970892 systemd[1]: Finished ignition-kargs.service.
Feb 12 21:58:56.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.973080 systemd[1]: Starting ignition-disks.service...
Feb 12 21:58:56.981579 ignition[1139]: Ignition 2.14.0
Feb 12 21:58:56.981591 ignition[1139]: Stage: disks
Feb 12 21:58:56.981743 ignition[1139]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:58:56.981767 ignition[1139]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:58:56.988528 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:58:56.989967 ignition[1139]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:58:56.992217 ignition[1139]: INFO     : PUT result: OK
Feb 12 21:58:56.995312 ignition[1139]: disks: disks passed
Feb 12 21:58:56.995369 ignition[1139]: Ignition finished successfully
Feb 12 21:58:56.998210 systemd[1]: Finished ignition-disks.service.
Feb 12 21:58:56.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:56.999435 systemd[1]: Reached target initrd-root-device.target.
Feb 12 21:58:57.001634 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:58:57.004747 systemd[1]: Reached target local-fs.target.
Feb 12 21:58:57.004815 systemd[1]: Reached target sysinit.target.
Feb 12 21:58:57.007450 systemd[1]: Reached target basic.target.
Feb 12 21:58:57.010353 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 21:58:57.040982 systemd-fsck[1147]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 21:58:57.045933 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 21:58:57.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:57.048602 systemd[1]: Mounting sysroot.mount...
Feb 12 21:58:57.063902 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 21:58:57.064620 systemd[1]: Mounted sysroot.mount.
Feb 12 21:58:57.066826 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 21:58:57.081853 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 21:58:57.084617 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 21:58:57.084784 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 21:58:57.084820 systemd[1]: Reached target ignition-diskful.target.
Feb 12 21:58:57.094167 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 21:58:57.106140 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:58:57.109834 systemd[1]: Starting initrd-setup-root.service...
Feb 12 21:58:57.116519 initrd-setup-root[1169]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 21:58:57.123891 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1164)
Feb 12 21:58:57.127356 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:58:57.127590 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:58:57.127622 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:58:57.131409 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory
Feb 12 21:58:57.135938 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:58:57.148736 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:58:57.153843 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 21:58:57.158923 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 21:58:57.311590 systemd[1]: Finished initrd-setup-root.service.
Feb 12 21:58:57.319561 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 21:58:57.319595 kernel: audit: type=1130 audit(1707775137.309:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:57.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:57.312902 systemd[1]: Starting ignition-mount.service...
Feb 12 21:58:57.322987 systemd[1]: Starting sysroot-boot.service...
Feb 12 21:58:57.328064 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 21:58:57.328258 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 21:58:57.355220 systemd[1]: Finished sysroot-boot.service.
Feb 12 21:58:57.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:57.362895 kernel: audit: type=1130 audit(1707775137.356:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:57.363036 ignition[1232]: INFO     : Ignition 2.14.0
Feb 12 21:58:57.363036 ignition[1232]: INFO     : Stage: mount
Feb 12 21:58:57.365199 ignition[1232]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:58:57.365199 ignition[1232]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:58:57.377966 ignition[1232]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:58:57.379535 ignition[1232]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:58:57.381343 ignition[1232]: INFO     : PUT result: OK
Feb 12 21:58:57.384447 ignition[1232]: INFO     : mount: mount passed
Feb 12 21:58:57.385472 ignition[1232]: INFO     : Ignition finished successfully
Feb 12 21:58:57.385544 systemd[1]: Finished ignition-mount.service.
Feb 12 21:58:57.393388 kernel: audit: type=1130 audit(1707775137.386:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:57.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:57.388332 systemd[1]: Starting ignition-files.service...
Feb 12 21:58:57.400077 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:58:57.419904 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1239)
Feb 12 21:58:57.419964 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:58:57.422699 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:58:57.422728 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:58:57.430896 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:58:57.434319 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:58:57.447112 ignition[1258]: INFO     : Ignition 2.14.0
Feb 12 21:58:57.447112 ignition[1258]: INFO     : Stage: files
Feb 12 21:58:57.449283 ignition[1258]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:58:57.449283 ignition[1258]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:58:57.458007 ignition[1258]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:58:57.459439 ignition[1258]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:58:57.461179 ignition[1258]: INFO     : PUT result: OK
Feb 12 21:58:57.464847 ignition[1258]: DEBUG    : files: compiled without relabeling support, skipping
Feb 12 21:58:57.468923 ignition[1258]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 12 21:58:57.470513 ignition[1258]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 21:58:57.483802 ignition[1258]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 21:58:57.485506 ignition[1258]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 12 21:58:57.487748 unknown[1258]: wrote ssh authorized keys file for user: core
Feb 12 21:58:57.489227 ignition[1258]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 21:58:57.492813 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 21:58:57.495208 ignition[1258]: INFO     : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 12 21:58:57.961071 systemd-networkd[1103]: eth0: Gained IPv6LL
Feb 12 21:58:57.986180 ignition[1258]: INFO     : GET result: OK
Feb 12 21:58:58.274578 ignition[1258]: DEBUG    : file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 12 21:58:58.278459 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 21:58:58.278459 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 21:58:58.278459 ignition[1258]: INFO     : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 12 21:58:58.675217 ignition[1258]: INFO     : GET result: OK
Feb 12 21:58:58.830410 ignition[1258]: DEBUG    : file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 12 21:58:58.833645 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 21:58:58.833645 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 21:58:58.838828 ignition[1258]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:58:58.841594 ignition[1258]: INFO     : op(1): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3651360633"
Feb 12 21:58:58.845451 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1261)
Feb 12 21:58:58.845493 ignition[1258]: CRITICAL : op(1): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3651360633": device or resource busy
Feb 12 21:58:58.845493 ignition[1258]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3651360633", trying btrfs: device or resource busy
Feb 12 21:58:58.845493 ignition[1258]: INFO     : op(2): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3651360633"
Feb 12 21:58:58.851944 ignition[1258]: INFO     : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3651360633"
Feb 12 21:58:58.863820 ignition[1258]: INFO     : op(3): [started]  unmounting "/mnt/oem3651360633"
Feb 12 21:58:58.865439 ignition[1258]: INFO     : op(3): [finished] unmounting "/mnt/oem3651360633"
Feb 12 21:58:58.865439 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 21:58:58.869395 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:58:58.869395 ignition[1258]: INFO     : GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 12 21:58:58.865480 systemd[1]: mnt-oem3651360633.mount: Deactivated successfully.
Feb 12 21:58:58.992770 ignition[1258]: INFO     : GET result: OK
Feb 12 21:58:59.346083 ignition[1258]: DEBUG    : file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 12 21:58:59.349071 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:58:59.349071 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:58:59.349071 ignition[1258]: INFO     : GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 12 21:58:59.418436 ignition[1258]: INFO     : GET result: OK
Feb 12 21:59:00.206790 ignition[1258]: DEBUG    : file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 12 21:59:00.209748 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:59:00.209748 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/install.sh"
Feb 12 21:59:00.209748 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 21:59:00.209748 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:59:00.209748 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:59:00.220288 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:59:00.220288 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:59:00.220288 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:59:00.227832 ignition[1258]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:59:00.233222 ignition[1258]: INFO     : op(4): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem368049467"
Feb 12 21:59:00.235264 ignition[1258]: CRITICAL : op(4): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem368049467": device or resource busy
Feb 12 21:59:00.235264 ignition[1258]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem368049467", trying btrfs: device or resource busy
Feb 12 21:59:00.235264 ignition[1258]: INFO     : op(5): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem368049467"
Feb 12 21:59:00.251476 ignition[1258]: INFO     : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem368049467"
Feb 12 21:59:00.251476 ignition[1258]: INFO     : op(6): [started]  unmounting "/mnt/oem368049467"
Feb 12 21:59:00.251476 ignition[1258]: INFO     : op(6): [finished] unmounting "/mnt/oem368049467"
Feb 12 21:59:00.251476 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:59:00.251476 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:59:00.251476 ignition[1258]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:59:00.254841 systemd[1]: mnt-oem368049467.mount: Deactivated successfully.
Feb 12 21:59:00.273626 ignition[1258]: INFO     : op(7): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem978561850"
Feb 12 21:59:00.275695 ignition[1258]: CRITICAL : op(7): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem978561850": device or resource busy
Feb 12 21:59:00.275695 ignition[1258]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem978561850", trying btrfs: device or resource busy
Feb 12 21:59:00.275695 ignition[1258]: INFO     : op(8): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem978561850"
Feb 12 21:59:00.281995 ignition[1258]: INFO     : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem978561850"
Feb 12 21:59:00.281995 ignition[1258]: INFO     : op(9): [started]  unmounting "/mnt/oem978561850"
Feb 12 21:59:00.281995 ignition[1258]: INFO     : op(9): [finished] unmounting "/mnt/oem978561850"
Feb 12 21:59:00.281995 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:59:00.281995 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:59:00.281995 ignition[1258]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:59:00.280810 systemd[1]: mnt-oem978561850.mount: Deactivated successfully.
Feb 12 21:59:00.300719 ignition[1258]: INFO     : op(a): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem45043848"
Feb 12 21:59:00.302942 ignition[1258]: CRITICAL : op(a): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem45043848": device or resource busy
Feb 12 21:59:00.302942 ignition[1258]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem45043848", trying btrfs: device or resource busy
Feb 12 21:59:00.302942 ignition[1258]: INFO     : op(b): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem45043848"
Feb 12 21:59:00.310103 ignition[1258]: INFO     : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem45043848"
Feb 12 21:59:00.311895 ignition[1258]: INFO     : op(c): [started]  unmounting "/mnt/oem45043848"
Feb 12 21:59:00.314606 ignition[1258]: INFO     : op(c): [finished] unmounting "/mnt/oem45043848"
Feb 12 21:59:00.315940 ignition[1258]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:59:00.315940 ignition[1258]: INFO     : files: op(e): [started]  processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:59:00.315940 ignition[1258]: INFO     : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:59:00.315940 ignition[1258]: INFO     : files: op(f): [started]  processing unit "amazon-ssm-agent.service"
Feb 12 21:59:00.315940 ignition[1258]: INFO     : files: op(f): op(10): [started]  writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:59:00.327974 ignition[1258]: INFO     : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:59:00.327974 ignition[1258]: INFO     : files: op(f): [finished] processing unit "amazon-ssm-agent.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(11): [started]  processing unit "nvidia.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(11): [finished] processing unit "nvidia.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(12): [started]  processing unit "prepare-cni-plugins.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(12): op(13): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(12): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(14): [started]  processing unit "prepare-critools.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(14): op(15): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(16): [started]  setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(17): [started]  setting preset to enabled for "nvidia.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(17): [finished] setting preset to enabled for "nvidia.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(18): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(19): [started]  setting preset to enabled for "prepare-critools.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(19): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(1a): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:59:00.334062 ignition[1258]: INFO     : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:59:00.376509 ignition[1258]: INFO     : files: createResultFile: createFiles: op(1b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:59:00.376509 ignition[1258]: INFO     : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:59:00.376509 ignition[1258]: INFO     : files: files passed
Feb 12 21:59:00.376509 ignition[1258]: INFO     : Ignition finished successfully
Feb 12 21:59:00.387331 kernel: audit: type=1130 audit(1707775140.374:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.368752 systemd[1]: Finished ignition-files.service.
Feb 12 21:59:00.394254 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 21:59:00.395443 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 21:59:00.414804 kernel: audit: type=1130 audit(1707775140.400:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.414842 kernel: audit: type=1131 audit(1707775140.400:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.396300 systemd[1]: Starting ignition-quench.service...
Feb 12 21:59:00.399798 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 21:59:00.399913 systemd[1]: Finished ignition-quench.service.
Feb 12 21:59:00.421762 initrd-setup-root-after-ignition[1284]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 21:59:00.424387 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 21:59:00.424603 systemd[1]: Reached target ignition-complete.target.
Feb 12 21:59:00.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.434894 kernel: audit: type=1130 audit(1707775140.422:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.434865 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 21:59:00.459763 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 21:59:00.459921 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 21:59:00.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.462318 systemd[1]: Reached target initrd-fs.target.
Feb 12 21:59:00.471635 kernel: audit: type=1130 audit(1707775140.460:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.471660 kernel: audit: type=1131 audit(1707775140.460:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.473419 systemd[1]: Reached target initrd.target.
Feb 12 21:59:00.475366 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 21:59:00.479584 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 21:59:00.492721 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 21:59:00.496112 systemd[1]: Starting initrd-cleanup.service...
Feb 12 21:59:00.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.502895 kernel: audit: type=1130 audit(1707775140.491:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.509821 systemd[1]: Stopped target nss-lookup.target.
Feb 12 21:59:00.510073 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 21:59:00.515156 systemd[1]: Stopped target timers.target.
Feb 12 21:59:00.517626 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 21:59:00.518801 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 21:59:00.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.521226 systemd[1]: Stopped target initrd.target.
Feb 12 21:59:00.523741 systemd[1]: Stopped target basic.target.
Feb 12 21:59:00.526176 systemd[1]: Stopped target ignition-complete.target.
Feb 12 21:59:00.528657 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 21:59:00.532353 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 21:59:00.535812 systemd[1]: Stopped target remote-fs.target.
Feb 12 21:59:00.537861 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 21:59:00.540666 systemd[1]: Stopped target sysinit.target.
Feb 12 21:59:00.542592 systemd[1]: Stopped target local-fs.target.
Feb 12 21:59:00.544754 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 21:59:00.545956 systemd[1]: Stopped target swap.target.
Feb 12 21:59:00.548950 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 21:59:00.549997 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 21:59:00.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.553252 systemd[1]: Stopped target cryptsetup.target.
Feb 12 21:59:00.555347 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 21:59:00.556797 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 21:59:00.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.559098 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 21:59:00.560665 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 21:59:00.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.563256 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 21:59:00.563351 systemd[1]: Stopped ignition-files.service.
Feb 12 21:59:00.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.597570 iscsid[1108]: iscsid shutting down.
Feb 12 21:59:00.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.572143 systemd[1]: Stopping ignition-mount.service...
Feb 12 21:59:00.583014 systemd[1]: Stopping iscsid.service...
Feb 12 21:59:00.585118 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 21:59:00.586501 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 21:59:00.595927 systemd[1]: Stopping sysroot-boot.service...
Feb 12 21:59:00.599307 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 21:59:00.599595 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 21:59:00.601238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 21:59:00.601407 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 21:59:00.633006 ignition[1297]: INFO     : Ignition 2.14.0
Feb 12 21:59:00.633006 ignition[1297]: INFO     : Stage: umount
Feb 12 21:59:00.633006 ignition[1297]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:59:00.633006 ignition[1297]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:59:00.605259 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 21:59:00.605455 systemd[1]: Stopped iscsid.service.
Feb 12 21:59:00.619829 systemd[1]: Stopping iscsiuio.service...
Feb 12 21:59:00.646053 ignition[1297]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:59:00.647622 ignition[1297]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:59:00.650115 ignition[1297]: INFO     : PUT result: OK
Feb 12 21:59:00.650079 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 21:59:00.650342 systemd[1]: Stopped iscsiuio.service.
Feb 12 21:59:00.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.654364 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 21:59:00.654520 systemd[1]: Finished initrd-cleanup.service.
Feb 12 21:59:00.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.659173 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 21:59:00.659290 systemd[1]: Stopped sysroot-boot.service.
Feb 12 21:59:00.671240 ignition[1297]: INFO     : umount: umount passed
Feb 12 21:59:00.672919 ignition[1297]: INFO     : Ignition finished successfully
Feb 12 21:59:00.678392 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 21:59:00.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.678525 systemd[1]: Stopped ignition-mount.service.
Feb 12 21:59:00.681761 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 21:59:00.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.681836 systemd[1]: Stopped ignition-disks.service.
Feb 12 21:59:00.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.683819 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 21:59:00.683893 systemd[1]: Stopped ignition-kargs.service.
Feb 12 21:59:00.685721 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 21:59:00.685763 systemd[1]: Stopped ignition-fetch.service.
Feb 12 21:59:00.687824 systemd[1]: Stopped target network.target.
Feb 12 21:59:00.688934 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 21:59:00.688984 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 21:59:00.689072 systemd[1]: Stopped target paths.target.
Feb 12 21:59:00.703772 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 21:59:00.705506 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 21:59:00.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.707432 systemd[1]: Stopped target slices.target.
Feb 12 21:59:00.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.709253 systemd[1]: Stopped target sockets.target.
Feb 12 21:59:00.711667 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 21:59:00.713022 systemd[1]: Closed iscsid.socket.
Feb 12 21:59:00.714719 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 21:59:00.714759 systemd[1]: Closed iscsiuio.socket.
Feb 12 21:59:00.716901 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 21:59:00.716953 systemd[1]: Stopped ignition-setup.service.
Feb 12 21:59:00.717041 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 21:59:00.717071 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 21:59:00.720140 systemd[1]: Stopping systemd-networkd.service...
Feb 12 21:59:00.724993 systemd[1]: Stopping systemd-resolved.service...
Feb 12 21:59:00.726826 systemd-networkd[1103]: eth0: DHCPv6 lease lost
Feb 12 21:59:00.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.735216 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 21:59:00.735322 systemd[1]: Stopped systemd-resolved.service.
Feb 12 21:59:00.738562 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 21:59:00.738657 systemd[1]: Stopped systemd-networkd.service.
Feb 12 21:59:00.744213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 21:59:00.744253 systemd[1]: Closed systemd-networkd.socket.
Feb 12 21:59:00.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.745000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 21:59:00.745000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 21:59:00.747422 systemd[1]: Stopping network-cleanup.service...
Feb 12 21:59:00.750161 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 21:59:00.750231 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 21:59:00.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.754079 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 21:59:00.754130 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 21:59:00.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.758985 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 21:59:00.760261 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 21:59:00.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.762532 systemd[1]: Stopping systemd-udevd.service...
Feb 12 21:59:00.765767 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 21:59:00.766976 systemd[1]: Stopped systemd-udevd.service.
Feb 12 21:59:00.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.769259 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 21:59:00.769303 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 21:59:00.773309 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 21:59:00.773368 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 21:59:00.775752 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 21:59:00.775804 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 21:59:00.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.780770 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 21:59:00.780831 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 21:59:00.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.783540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 21:59:00.783587 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 21:59:00.787645 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 21:59:00.790966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 21:59:00.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.791081 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 21:59:00.792739 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 21:59:00.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:00.792826 systemd[1]: Stopped network-cleanup.service.
Feb 12 21:59:00.796141 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 21:59:00.796227 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 21:59:00.798575 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 21:59:00.806332 systemd[1]: Starting initrd-switch-root.service...
Feb 12 21:59:00.826138 systemd[1]: Switching root.
Feb 12 21:59:00.848945 systemd-journald[185]: Journal stopped
Feb 12 21:59:06.153375 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Feb 12 21:59:06.153462 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb 12 21:59:06.153489 kernel: SELinux:  Class anon_inode not defined in policy.
Feb 12 21:59:06.153512 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 21:59:06.153535 kernel: SELinux:  policy capability network_peer_controls=1
Feb 12 21:59:06.153552 kernel: SELinux:  policy capability open_perms=1
Feb 12 21:59:06.153569 kernel: SELinux:  policy capability extended_socket_class=1
Feb 12 21:59:06.153590 kernel: SELinux:  policy capability always_check_network=0
Feb 12 21:59:06.153607 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 12 21:59:06.153960 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 12 21:59:06.153985 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 12 21:59:06.154008 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 12 21:59:06.154029 systemd[1]: Successfully loaded SELinux policy in 117.202ms.
Feb 12 21:59:06.154059 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.132ms.
Feb 12 21:59:06.154081 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:59:06.154100 systemd[1]: Detected virtualization amazon.
Feb 12 21:59:06.154119 systemd[1]: Detected architecture x86-64.
Feb 12 21:59:06.154746 systemd[1]: Detected first boot.
Feb 12 21:59:06.154770 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 21:59:06.154788 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 21:59:06.154815 systemd[1]: Populated /etc with preset unit settings.
Feb 12 21:59:06.154837 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:59:06.154863 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:59:06.154915 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:59:06.154936 kernel: kauditd_printk_skb: 51 callbacks suppressed
Feb 12 21:59:06.154956 kernel: audit: type=1334 audit(1707775145.840:88): prog-id=12 op=LOAD
Feb 12 21:59:06.154975 kernel: audit: type=1334 audit(1707775145.840:89): prog-id=3 op=UNLOAD
Feb 12 21:59:06.154998 kernel: audit: type=1334 audit(1707775145.842:90): prog-id=13 op=LOAD
Feb 12 21:59:06.155014 kernel: audit: type=1334 audit(1707775145.843:91): prog-id=14 op=LOAD
Feb 12 21:59:06.155031 kernel: audit: type=1334 audit(1707775145.843:92): prog-id=4 op=UNLOAD
Feb 12 21:59:06.161414 kernel: audit: type=1334 audit(1707775145.843:93): prog-id=5 op=UNLOAD
Feb 12 21:59:06.161456 kernel: audit: type=1334 audit(1707775145.845:94): prog-id=15 op=LOAD
Feb 12 21:59:06.161477 kernel: audit: type=1334 audit(1707775145.845:95): prog-id=12 op=UNLOAD
Feb 12 21:59:06.161497 kernel: audit: type=1334 audit(1707775145.848:96): prog-id=16 op=LOAD
Feb 12 21:59:06.161588 kernel: audit: type=1334 audit(1707775145.851:97): prog-id=17 op=LOAD
Feb 12 21:59:06.161617 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 21:59:06.161643 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 21:59:06.161662 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 21:59:06.161686 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 21:59:06.161706 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 21:59:06.161730 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 21:59:06.161753 systemd[1]: Created slice system-getty.slice.
Feb 12 21:59:06.161775 systemd[1]: Created slice system-modprobe.slice.
Feb 12 21:59:06.161795 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 21:59:06.161834 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 21:59:06.161855 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 21:59:06.161929 systemd[1]: Created slice user.slice.
Feb 12 21:59:06.161949 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:59:06.161967 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 21:59:06.161985 systemd[1]: Set up automount boot.automount.
Feb 12 21:59:06.163103 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 21:59:06.163139 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 21:59:06.163158 systemd[1]: Stopped target initrd-fs.target.
Feb 12 21:59:06.163176 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 21:59:06.163194 systemd[1]: Reached target integritysetup.target.
Feb 12 21:59:06.163213 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 21:59:06.163232 systemd[1]: Reached target remote-fs.target.
Feb 12 21:59:06.163253 systemd[1]: Reached target slices.target.
Feb 12 21:59:06.163272 systemd[1]: Reached target swap.target.
Feb 12 21:59:06.163291 systemd[1]: Reached target torcx.target.
Feb 12 21:59:06.163313 systemd[1]: Reached target veritysetup.target.
Feb 12 21:59:06.163333 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 21:59:06.163355 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 21:59:06.163375 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:59:06.163395 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:59:06.163412 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:59:06.163430 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 21:59:06.163467 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 21:59:06.163486 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 21:59:06.163509 systemd[1]: Mounting media.mount...
Feb 12 21:59:06.163528 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:59:06.163547 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 21:59:06.163566 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 21:59:06.163594 systemd[1]: Mounting tmp.mount...
Feb 12 21:59:06.163617 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 21:59:06.163657 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 21:59:06.163677 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:59:06.163695 systemd[1]: Starting modprobe@configfs.service...
Feb 12 21:59:06.163714 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 21:59:06.163732 systemd[1]: Starting modprobe@drm.service...
Feb 12 21:59:06.163844 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 21:59:06.163865 systemd[1]: Starting modprobe@fuse.service...
Feb 12 21:59:06.163901 systemd[1]: Starting modprobe@loop.service...
Feb 12 21:59:06.163924 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 21:59:06.163943 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 21:59:06.163962 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 21:59:06.163981 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 21:59:06.164000 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 21:59:06.164018 systemd[1]: Stopped systemd-journald.service.
Feb 12 21:59:06.164037 systemd[1]: Starting systemd-journald.service...
Feb 12 21:59:06.164055 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:59:06.164074 systemd[1]: Starting systemd-network-generator.service...
Feb 12 21:59:06.164095 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 21:59:06.164114 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 21:59:06.164133 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 21:59:06.164152 systemd[1]: Stopped verity-setup.service.
Feb 12 21:59:06.164172 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:59:06.164191 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 21:59:06.164210 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 21:59:06.164229 systemd[1]: Mounted media.mount.
Feb 12 21:59:06.164249 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 21:59:06.164279 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 21:59:06.164298 systemd[1]: Mounted tmp.mount.
Feb 12 21:59:06.164317 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 21:59:06.164337 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 21:59:06.164355 systemd[1]: Finished modprobe@configfs.service.
Feb 12 21:59:06.164374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 21:59:06.164393 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 21:59:06.164412 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 21:59:06.164492 systemd[1]: Finished modprobe@drm.service.
Feb 12 21:59:06.164518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 21:59:06.164561 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 21:59:06.164581 systemd[1]: Finished systemd-network-generator.service.
Feb 12 21:59:06.164599 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 21:59:06.164624 systemd-journald[1408]: Journal started
Feb 12 21:59:06.164704 systemd-journald[1408]: Runtime Journal (/run/log/journal/ec293d392ada477961c42c9aa1975e2f) is 4.8M, max 38.7M, 33.9M free.
Feb 12 21:59:01.634000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 21:59:01.775000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 21:59:01.775000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 21:59:01.777000 audit: BPF prog-id=10 op=LOAD
Feb 12 21:59:01.777000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 21:59:01.777000 audit: BPF prog-id=11 op=LOAD
Feb 12 21:59:01.777000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 21:59:02.020000 audit[1332]: AVC avc:  denied  { associate } for  pid=1332 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 21:59:02.020000 audit[1332]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:59:02.020000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 21:59:02.023000 audit[1332]: AVC avc:  denied  { associate } for  pid=1332 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 21:59:02.023000 audit[1332]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b5 a2=1ed a3=0 items=2 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:59:02.023000 audit: CWD cwd="/"
Feb 12 21:59:02.023000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:02.023000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:02.023000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 21:59:05.840000 audit: BPF prog-id=12 op=LOAD
Feb 12 21:59:05.840000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 21:59:05.842000 audit: BPF prog-id=13 op=LOAD
Feb 12 21:59:05.843000 audit: BPF prog-id=14 op=LOAD
Feb 12 21:59:05.843000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 21:59:05.843000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 21:59:05.845000 audit: BPF prog-id=15 op=LOAD
Feb 12 21:59:05.845000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 21:59:05.848000 audit: BPF prog-id=16 op=LOAD
Feb 12 21:59:05.851000 audit: BPF prog-id=17 op=LOAD
Feb 12 21:59:05.854000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 21:59:05.854000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 21:59:05.856000 audit: BPF prog-id=18 op=LOAD
Feb 12 21:59:05.856000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 21:59:05.857000 audit: BPF prog-id=19 op=LOAD
Feb 12 21:59:05.857000 audit: BPF prog-id=20 op=LOAD
Feb 12 21:59:05.857000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 21:59:05.858000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 21:59:05.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:05.863000 audit: BPF prog-id=18 op=UNLOAD
Feb 12 21:59:06.169903 systemd[1]: Started systemd-journald.service.
Feb 12 21:59:05.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:05.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.070000 audit: BPF prog-id=21 op=LOAD
Feb 12 21:59:06.070000 audit: BPF prog-id=22 op=LOAD
Feb 12 21:59:06.070000 audit: BPF prog-id=23 op=LOAD
Feb 12 21:59:06.070000 audit: BPF prog-id=19 op=UNLOAD
Feb 12 21:59:06.070000 audit: BPF prog-id=20 op=UNLOAD
Feb 12 21:59:06.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.145000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 21:59:06.145000 audit[1408]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff5fd4d450 a2=4000 a3=7fff5fd4d4ec items=0 ppid=1 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:59:06.145000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 21:59:06.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:05.836948 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 21:59:02.009761 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:59:05.860340 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 21:59:02.010330 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 21:59:06.171723 systemd[1]: Reached target network-pre.target.
Feb 12 21:59:02.010358 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 21:59:06.174556 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 21:59:02.010403 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 21:59:02.010419 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 21:59:02.010463 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 21:59:02.010482 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 21:59:02.010746 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 21:59:02.010794 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 21:59:02.010812 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 21:59:02.020866 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 21:59:02.020941 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 21:59:02.020982 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 21:59:02.020999 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 21:59:02.021017 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 21:59:02.021031 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 21:59:05.260499 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:59:05.260755 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:59:05.260926 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:59:05.261175 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:59:05.261246 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 21:59:05.261309 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-02-12T21:59:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 21:59:06.178022 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 21:59:06.182050 kernel: loop: module loaded
Feb 12 21:59:06.185294 kernel: fuse: init (API version 7.34)
Feb 12 21:59:06.183661 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 21:59:06.186524 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 21:59:06.187688 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 21:59:06.189720 systemd[1]: Starting systemd-random-seed.service...
Feb 12 21:59:06.193560 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 21:59:06.193786 systemd[1]: Finished modprobe@fuse.service.
Feb 12 21:59:06.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.195179 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 21:59:06.198323 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 21:59:06.203774 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 21:59:06.206625 systemd[1]: Finished systemd-modules-load.service.
Feb 12 21:59:06.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.208548 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 21:59:06.208729 systemd[1]: Finished modprobe@loop.service.
Feb 12 21:59:06.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.211079 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 21:59:06.213189 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:59:06.224607 systemd-journald[1408]: Time spent on flushing to /var/log/journal/ec293d392ada477961c42c9aa1975e2f is 78.760ms for 1209 entries.
Feb 12 21:59:06.224607 systemd-journald[1408]: System Journal (/var/log/journal/ec293d392ada477961c42c9aa1975e2f) is 8.0M, max 195.6M, 187.6M free.
Feb 12 21:59:06.312454 systemd-journald[1408]: Received client request to flush runtime journal.
Feb 12 21:59:06.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.230860 systemd[1]: Finished systemd-random-seed.service.
Feb 12 21:59:06.232557 systemd[1]: Reached target first-boot-complete.target.
Feb 12 21:59:06.262090 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:59:06.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.318061 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 21:59:06.320779 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 21:59:06.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.323914 systemd[1]: Starting systemd-sysusers.service...
Feb 12 21:59:06.361681 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 21:59:06.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.364992 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 21:59:06.377213 udevadm[1449]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 21:59:06.455333 systemd[1]: Finished systemd-sysusers.service.
Feb 12 21:59:06.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.957757 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 21:59:06.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:06.958000 audit: BPF prog-id=24 op=LOAD
Feb 12 21:59:06.958000 audit: BPF prog-id=25 op=LOAD
Feb 12 21:59:06.958000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 21:59:06.958000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 21:59:06.960666 systemd[1]: Starting systemd-udevd.service...
Feb 12 21:59:06.980637 systemd-udevd[1450]: Using default interface naming scheme 'v252'.
Feb 12 21:59:07.025911 systemd[1]: Started systemd-udevd.service.
Feb 12 21:59:07.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.027000 audit: BPF prog-id=26 op=LOAD
Feb 12 21:59:07.030017 systemd[1]: Starting systemd-networkd.service...
Feb 12 21:59:07.037000 audit: BPF prog-id=27 op=LOAD
Feb 12 21:59:07.037000 audit: BPF prog-id=28 op=LOAD
Feb 12 21:59:07.038000 audit: BPF prog-id=29 op=LOAD
Feb 12 21:59:07.040802 systemd[1]: Starting systemd-userdbd.service...
Feb 12 21:59:07.099485 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 21:59:07.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.120864 systemd[1]: Started systemd-userdbd.service.
Feb 12 21:59:07.126931 (udev-worker)[1452]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:07.250957 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1451)
Feb 12 21:59:07.251074 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 12 21:59:07.268112 kernel: ACPI: button: Power Button [PWRF]
Feb 12 21:59:07.271939 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 12 21:59:07.281907 kernel: ACPI: button: Sleep Button [SLPF]
Feb 12 21:59:07.293996 systemd-networkd[1456]: lo: Link UP
Feb 12 21:59:07.294009 systemd-networkd[1456]: lo: Gained carrier
Feb 12 21:59:07.294776 systemd-networkd[1456]: Enumeration completed
Feb 12 21:59:07.295047 systemd[1]: Started systemd-networkd.service.
Feb 12 21:59:07.295067 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 21:59:07.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.298329 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 21:59:07.306693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 21:59:07.306258 systemd-networkd[1456]: eth0: Link UP
Feb 12 21:59:07.306427 systemd-networkd[1456]: eth0: Gained carrier
Feb 12 21:59:07.319123 systemd-networkd[1456]: eth0: DHCPv4 address 172.31.17.60/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 21:59:07.349000 audit[1452]: AVC avc:  denied  { confidentiality } for  pid=1452 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 21:59:07.349000 audit[1452]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560740c0fa80 a1=32194 a2=7fcbf5307bc5 a3=5 items=108 ppid=1450 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:59:07.349000 audit: CWD cwd="/"
Feb 12 21:59:07.349000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=1 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=2 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=3 name=(null) inode=15487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=4 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=5 name=(null) inode=15488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=6 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=7 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=8 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=9 name=(null) inode=15490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=10 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=11 name=(null) inode=15491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=12 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=13 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=14 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=15 name=(null) inode=15493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=16 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=17 name=(null) inode=15494 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=18 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=19 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=20 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=21 name=(null) inode=15496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=22 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=23 name=(null) inode=15497 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=24 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=25 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=26 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=27 name=(null) inode=15499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=28 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=29 name=(null) inode=15500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=30 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=31 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=32 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=33 name=(null) inode=15502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=34 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=35 name=(null) inode=15503 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=36 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=37 name=(null) inode=15504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=38 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=39 name=(null) inode=15505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=40 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=41 name=(null) inode=15506 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=42 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=43 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=44 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=45 name=(null) inode=15508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=46 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=47 name=(null) inode=15509 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=48 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=49 name=(null) inode=15510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=50 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=51 name=(null) inode=15511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=52 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=53 name=(null) inode=15512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=55 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=56 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=57 name=(null) inode=15514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=58 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=59 name=(null) inode=15515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=60 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=61 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=62 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=63 name=(null) inode=15517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=64 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=65 name=(null) inode=15518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=66 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=67 name=(null) inode=15519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=68 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=69 name=(null) inode=15520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=70 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=71 name=(null) inode=15521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=72 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=73 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=74 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=75 name=(null) inode=15523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=76 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=77 name=(null) inode=15524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=78 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=79 name=(null) inode=15525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=80 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=81 name=(null) inode=15526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=82 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=83 name=(null) inode=15527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=84 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=85 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=86 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=87 name=(null) inode=15529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=88 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=89 name=(null) inode=15530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=90 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=91 name=(null) inode=15531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=92 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=93 name=(null) inode=15532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=94 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=95 name=(null) inode=15533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=96 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=97 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=98 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=99 name=(null) inode=15535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=100 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=101 name=(null) inode=15536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=102 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=103 name=(null) inode=15537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=104 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=105 name=(null) inode=15538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=106 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PATH item=107 name=(null) inode=15539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:59:07.349000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 12 21:59:07.428896 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 12 21:59:07.446002 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 12 21:59:07.451928 kernel: mousedev: PS/2 mouse device common for all mice
Feb 12 21:59:07.480256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 21:59:07.613359 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 21:59:07.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.616169 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 21:59:07.659437 lvm[1565]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:59:07.697202 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 21:59:07.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.698512 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:59:07.701269 systemd[1]: Starting lvm2-activation.service...
Feb 12 21:59:07.708379 lvm[1566]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:59:07.733682 systemd[1]: Finished lvm2-activation.service.
Feb 12 21:59:07.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.735348 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:59:07.736977 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 21:59:07.737004 systemd[1]: Reached target local-fs.target.
Feb 12 21:59:07.738484 systemd[1]: Reached target machines.target.
Feb 12 21:59:07.741831 systemd[1]: Starting ldconfig.service...
Feb 12 21:59:07.743843 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 21:59:07.743918 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:59:07.745639 systemd[1]: Starting systemd-boot-update.service...
Feb 12 21:59:07.749354 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 21:59:07.754437 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 21:59:07.756038 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:59:07.756129 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:59:07.758093 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 21:59:07.781418 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1568 (bootctl)
Feb 12 21:59:07.783613 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 21:59:07.825825 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 21:59:07.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.834108 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 21:59:07.861413 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 21:59:07.874285 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 21:59:07.948653 systemd-fsck[1576]: fsck.fat 4.2 (2021-01-31)
Feb 12 21:59:07.948653 systemd-fsck[1576]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters
Feb 12 21:59:07.950073 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 21:59:07.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:07.953480 systemd[1]: Mounting boot.mount...
Feb 12 21:59:07.978029 systemd[1]: Mounted boot.mount.
Feb 12 21:59:08.033955 systemd[1]: Finished systemd-boot-update.service.
Feb 12 21:59:08.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:08.152832 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 21:59:08.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:08.155690 systemd[1]: Starting audit-rules.service...
Feb 12 21:59:08.158764 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 21:59:08.162016 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 21:59:08.164000 audit: BPF prog-id=30 op=LOAD
Feb 12 21:59:08.168937 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:59:08.171000 audit: BPF prog-id=31 op=LOAD
Feb 12 21:59:08.175641 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 21:59:08.178470 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 21:59:08.191000 audit[1595]: SYSTEM_BOOT pid=1595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:08.196747 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 21:59:08.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:08.198372 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 21:59:08.205641 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 21:59:08.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:59:08.322000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 21:59:08.322000 audit[1610]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdb0ddfd0 a2=420 a3=0 items=0 ppid=1590 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:59:08.322000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 21:59:08.325767 augenrules[1610]: No rules
Feb 12 21:59:08.323529 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 21:59:08.328110 systemd[1]: Finished audit-rules.service.
Feb 12 21:59:08.333231 systemd[1]: Started systemd-timesyncd.service.
Feb 12 21:59:08.335020 systemd[1]: Reached target time-set.target.
Feb 12 21:59:08.402774 systemd-resolved[1593]: Positive Trust Anchors:
Feb 12 21:59:08.402792 systemd-resolved[1593]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 21:59:08.402886 systemd-resolved[1593]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 21:59:08.410707 systemd-timesyncd[1594]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Feb 12 21:59:08.410792 systemd-timesyncd[1594]: Initial clock synchronization to Mon 2024-02-12 21:59:08.617849 UTC.
Feb 12 21:59:08.415616 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 21:59:08.418458 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 21:59:08.433023 systemd-resolved[1593]: Defaulting to hostname 'linux'.
Feb 12 21:59:08.434856 systemd[1]: Started systemd-resolved.service.
Feb 12 21:59:08.436070 systemd[1]: Reached target network.target.
Feb 12 21:59:08.437113 systemd[1]: Reached target nss-lookup.target.
Feb 12 21:59:08.457118 systemd-networkd[1456]: eth0: Gained IPv6LL
Feb 12 21:59:08.460533 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 21:59:08.462695 systemd[1]: Reached target network-online.target.
Feb 12 21:59:08.589217 ldconfig[1567]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 21:59:08.603452 systemd[1]: Finished ldconfig.service.
Feb 12 21:59:08.605826 systemd[1]: Starting systemd-update-done.service...
Feb 12 21:59:08.615079 systemd[1]: Finished systemd-update-done.service.
Feb 12 21:59:08.616991 systemd[1]: Reached target sysinit.target.
Feb 12 21:59:08.618087 systemd[1]: Started motdgen.path.
Feb 12 21:59:08.619362 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 21:59:08.621344 systemd[1]: Started logrotate.timer.
Feb 12 21:59:08.622580 systemd[1]: Started mdadm.timer.
Feb 12 21:59:08.623805 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 21:59:08.625064 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 21:59:08.625098 systemd[1]: Reached target paths.target.
Feb 12 21:59:08.626217 systemd[1]: Reached target timers.target.
Feb 12 21:59:08.628332 systemd[1]: Listening on dbus.socket.
Feb 12 21:59:08.630904 systemd[1]: Starting docker.socket...
Feb 12 21:59:08.635312 systemd[1]: Listening on sshd.socket.
Feb 12 21:59:08.636392 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:59:08.636966 systemd[1]: Listening on docker.socket.
Feb 12 21:59:08.638046 systemd[1]: Reached target sockets.target.
Feb 12 21:59:08.639097 systemd[1]: Reached target basic.target.
Feb 12 21:59:08.640004 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:59:08.640037 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:59:08.641665 systemd[1]: Started amazon-ssm-agent.service.
Feb 12 21:59:08.644692 systemd[1]: Starting containerd.service...
Feb 12 21:59:08.655440 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 21:59:08.659257 systemd[1]: Starting dbus.service...
Feb 12 21:59:08.661696 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 21:59:08.665195 systemd[1]: Starting extend-filesystems.service...
Feb 12 21:59:08.666580 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 21:59:08.668851 systemd[1]: Starting motdgen.service...
Feb 12 21:59:08.671586 systemd[1]: Started nvidia.service.
Feb 12 21:59:08.675135 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 21:59:08.678807 systemd[1]: Starting prepare-critools.service...
Feb 12 21:59:08.687905 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 21:59:08.690923 systemd[1]: Starting sshd-keygen.service...
Feb 12 21:59:08.696232 systemd[1]: Starting systemd-logind.service...
Feb 12 21:59:08.697839 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:59:08.698037 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 21:59:08.698834 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 12 21:59:08.699887 systemd[1]: Starting update-engine.service...
Feb 12 21:59:08.703727 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 21:59:08.844264 jq[1627]: false
Feb 12 21:59:08.844684 jq[1637]: true
Feb 12 21:59:08.752202 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 21:59:08.752439 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 21:59:08.900663 tar[1640]: crictl
Feb 12 21:59:08.909787 tar[1639]: ./
Feb 12 21:59:08.909787 tar[1639]: ./loopback
Feb 12 21:59:08.871387 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 21:59:08.871714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 21:59:08.937066 dbus-daemon[1626]: [system] SELinux support is enabled
Feb 12 21:59:08.953091 systemd[1]: Started dbus.service.
Feb 12 21:59:08.957680 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 21:59:08.957726 systemd[1]: Reached target system-config.target.
Feb 12 21:59:08.958993 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 21:59:08.959021 systemd[1]: Reached target user-config.target.
Feb 12 21:59:08.977758 dbus-daemon[1626]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1456 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 12 21:59:08.997689 jq[1645]: true
Feb 12 21:59:08.989414 systemd[1]: Starting systemd-hostnamed.service...
Feb 12 21:59:08.982031 dbus-daemon[1626]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 12 21:59:09.052799 extend-filesystems[1628]: Found nvme0n1
Feb 12 21:59:09.057395 extend-filesystems[1628]: Found nvme0n1p1
Feb 12 21:59:09.061054 extend-filesystems[1628]: Found nvme0n1p2
Feb 12 21:59:09.062953 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 21:59:09.063469 systemd[1]: Finished motdgen.service.
Feb 12 21:59:09.065149 extend-filesystems[1628]: Found nvme0n1p3
Feb 12 21:59:09.068617 extend-filesystems[1628]: Found usr
Feb 12 21:59:09.071222 extend-filesystems[1628]: Found nvme0n1p4
Feb 12 21:59:09.073107 extend-filesystems[1628]: Found nvme0n1p6
Feb 12 21:59:09.074812 extend-filesystems[1628]: Found nvme0n1p7
Feb 12 21:59:09.075879 extend-filesystems[1628]: Found nvme0n1p9
Feb 12 21:59:09.076973 extend-filesystems[1628]: Checking size of /dev/nvme0n1p9
Feb 12 21:59:09.109209 extend-filesystems[1628]: Resized partition /dev/nvme0n1p9
Feb 12 21:59:09.125196 update_engine[1636]: I0212 21:59:09.124569  1636 main.cc:92] Flatcar Update Engine starting
Feb 12 21:59:09.130777 systemd[1]: Started update-engine.service.
Feb 12 21:59:09.136057 systemd[1]: Started locksmithd.service.
Feb 12 21:59:09.138245 update_engine[1636]: I0212 21:59:09.138206  1636 update_check_scheduler.cc:74] Next update check in 5m22s
Feb 12 21:59:09.139513 extend-filesystems[1689]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 21:59:09.146910 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 12 21:59:09.158077 amazon-ssm-agent[1623]: 2024/02/12 21:59:09 Failed to load instance info from vault. RegistrationKey does not exist.
Feb 12 21:59:09.171543 amazon-ssm-agent[1623]: Initializing new seelog logger
Feb 12 21:59:09.171883 amazon-ssm-agent[1623]: New Seelog Logger Creation Complete
Feb 12 21:59:09.172003 amazon-ssm-agent[1623]: 2024/02/12 21:59:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 21:59:09.172003 amazon-ssm-agent[1623]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 21:59:09.172267 amazon-ssm-agent[1623]: 2024/02/12 21:59:09 processing appconfig overrides
Feb 12 21:59:09.241908 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 12 21:59:09.270406 extend-filesystems[1689]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 12 21:59:09.270406 extend-filesystems[1689]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 21:59:09.270406 extend-filesystems[1689]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 12 21:59:09.277294 extend-filesystems[1628]: Resized filesystem in /dev/nvme0n1p9
Feb 12 21:59:09.271666 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 21:59:09.287096 bash[1694]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 21:59:09.272111 systemd[1]: Finished extend-filesystems.service.
Feb 12 21:59:09.278321 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 21:59:09.308942 systemd-logind[1635]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 12 21:59:09.308977 systemd-logind[1635]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 12 21:59:09.309001 systemd-logind[1635]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 21:59:09.309222 systemd-logind[1635]: New seat seat0.
Feb 12 21:59:09.311459 systemd[1]: Started systemd-logind.service.
Feb 12 21:59:09.328209 env[1642]: time="2024-02-12T21:59:09.328142036Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 21:59:09.366170 tar[1639]: ./bandwidth
Feb 12 21:59:09.404394 dbus-daemon[1626]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 12 21:59:09.404745 systemd[1]: Started systemd-hostnamed.service.
Feb 12 21:59:09.405779 dbus-daemon[1626]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1668 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 12 21:59:09.412680 systemd[1]: Starting polkit.service...
Feb 12 21:59:09.446430 polkitd[1714]: Started polkitd version 121
Feb 12 21:59:09.470438 systemd[1]: nvidia.service: Deactivated successfully.
Feb 12 21:59:09.473727 polkitd[1714]: Loading rules from directory /etc/polkit-1/rules.d
Feb 12 21:59:09.473816 polkitd[1714]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 12 21:59:09.491589 polkitd[1714]: Finished loading, compiling and executing 2 rules
Feb 12 21:59:09.492299 dbus-daemon[1626]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 12 21:59:09.492499 systemd[1]: Started polkit.service.
Feb 12 21:59:09.493698 polkitd[1714]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 12 21:59:09.528003 env[1642]: time="2024-02-12T21:59:09.527887335Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 21:59:09.531337 systemd-hostnamed[1668]: Hostname set to <ip-172-31-17-60> (transient)
Feb 12 21:59:09.531463 systemd-resolved[1593]: System hostname changed to 'ip-172-31-17-60'.
Feb 12 21:59:09.534160 env[1642]: time="2024-02-12T21:59:09.534122192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:59:09.538398 env[1642]: time="2024-02-12T21:59:09.538350618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:59:09.538550 env[1642]: time="2024-02-12T21:59:09.538528835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:59:09.538953 env[1642]: time="2024-02-12T21:59:09.538922903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:59:09.541918 env[1642]: time="2024-02-12T21:59:09.541011451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 21:59:09.542618 env[1642]: time="2024-02-12T21:59:09.542593565Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 21:59:09.542699 env[1642]: time="2024-02-12T21:59:09.542683614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 21:59:09.542892 env[1642]: time="2024-02-12T21:59:09.542872587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:59:09.543274 env[1642]: time="2024-02-12T21:59:09.543253419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:59:09.547438 env[1642]: time="2024-02-12T21:59:09.547407122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:59:09.547552 env[1642]: time="2024-02-12T21:59:09.547534646Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 21:59:09.547710 env[1642]: time="2024-02-12T21:59:09.547690330Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 21:59:09.547807 env[1642]: time="2024-02-12T21:59:09.547790784Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 21:59:09.554662 env[1642]: time="2024-02-12T21:59:09.554621982Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 21:59:09.554829 env[1642]: time="2024-02-12T21:59:09.554811140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 21:59:09.554961 env[1642]: time="2024-02-12T21:59:09.554913076Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 21:59:09.555102 env[1642]: time="2024-02-12T21:59:09.555054879Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555215 env[1642]: time="2024-02-12T21:59:09.555085742Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555299 env[1642]: time="2024-02-12T21:59:09.555283780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555358161Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555382774Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555404427Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555426449Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555448783Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555468156Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555603431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 21:59:09.555792 env[1642]: time="2024-02-12T21:59:09.555707701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 21:59:09.556565 env[1642]: time="2024-02-12T21:59:09.556530751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 21:59:09.556817 env[1642]: time="2024-02-12T21:59:09.556795459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.556939 env[1642]: time="2024-02-12T21:59:09.556923329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 21:59:09.557775 env[1642]: time="2024-02-12T21:59:09.557747592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.557911 env[1642]: time="2024-02-12T21:59:09.557891515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.559172 env[1642]: time="2024-02-12T21:59:09.559146327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563102 env[1642]: time="2024-02-12T21:59:09.563073396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563216 env[1642]: time="2024-02-12T21:59:09.563200351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563287 env[1642]: time="2024-02-12T21:59:09.563274662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563371 env[1642]: time="2024-02-12T21:59:09.563355443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563443 env[1642]: time="2024-02-12T21:59:09.563429834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563526 env[1642]: time="2024-02-12T21:59:09.563514938Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 21:59:09.563759 env[1642]: time="2024-02-12T21:59:09.563742553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563839 env[1642]: time="2024-02-12T21:59:09.563826668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.563940 env[1642]: time="2024-02-12T21:59:09.563924962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.564026 env[1642]: time="2024-02-12T21:59:09.564012295Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 21:59:09.564106 env[1642]: time="2024-02-12T21:59:09.564084757Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 21:59:09.564176 env[1642]: time="2024-02-12T21:59:09.564160476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 21:59:09.564262 env[1642]: time="2024-02-12T21:59:09.564247247Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 21:59:09.564363 env[1642]: time="2024-02-12T21:59:09.564351416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 21:59:09.564749 env[1642]: time="2024-02-12T21:59:09.564678560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 21:59:09.566567 env[1642]: time="2024-02-12T21:59:09.564917269Z" level=info msg="Connect containerd service"
Feb 12 21:59:09.566567 env[1642]: time="2024-02-12T21:59:09.564979253Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 21:59:09.566567 env[1642]: time="2024-02-12T21:59:09.565951751Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 21:59:09.566567 env[1642]: time="2024-02-12T21:59:09.566318381Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 21:59:09.566567 env[1642]: time="2024-02-12T21:59:09.566372867Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 21:59:09.566567 env[1642]: time="2024-02-12T21:59:09.566432259Z" level=info msg="containerd successfully booted in 0.305498s"
Feb 12 21:59:09.566948 systemd[1]: Started containerd.service.
Feb 12 21:59:09.603280 tar[1639]: ./ptp
Feb 12 21:59:09.606035 env[1642]: time="2024-02-12T21:59:09.605970138Z" level=info msg="Start subscribing containerd event"
Feb 12 21:59:09.606163 env[1642]: time="2024-02-12T21:59:09.606058258Z" level=info msg="Start recovering state"
Feb 12 21:59:09.606163 env[1642]: time="2024-02-12T21:59:09.606139887Z" level=info msg="Start event monitor"
Feb 12 21:59:09.606163 env[1642]: time="2024-02-12T21:59:09.606154740Z" level=info msg="Start snapshots syncer"
Feb 12 21:59:09.606273 env[1642]: time="2024-02-12T21:59:09.606167662Z" level=info msg="Start cni network conf syncer for default"
Feb 12 21:59:09.606273 env[1642]: time="2024-02-12T21:59:09.606178972Z" level=info msg="Start streaming server"
Feb 12 21:59:09.746800 tar[1639]: ./vlan
Feb 12 21:59:09.828306 coreos-metadata[1625]: Feb 12 21:59:09.826 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 12 21:59:09.832731 coreos-metadata[1625]: Feb 12 21:59:09.832 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Feb 12 21:59:09.833916 coreos-metadata[1625]: Feb 12 21:59:09.833 INFO Fetch successful
Feb 12 21:59:09.834134 coreos-metadata[1625]: Feb 12 21:59:09.834 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 12 21:59:09.834959 coreos-metadata[1625]: Feb 12 21:59:09.834 INFO Fetch successful
Feb 12 21:59:09.837313 unknown[1625]: wrote ssh authorized keys file for user: core
Feb 12 21:59:09.865384 update-ssh-keys[1798]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 21:59:09.866558 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 12 21:59:09.892077 tar[1639]: ./host-device
Feb 12 21:59:10.032231 tar[1639]: ./tuning
Feb 12 21:59:10.112801 tar[1639]: ./vrf
Feb 12 21:59:10.204163 tar[1639]: ./sbr
Feb 12 21:59:10.288400 tar[1639]: ./tap
Feb 12 21:59:10.372809 systemd[1]: Finished prepare-critools.service.
Feb 12 21:59:10.393349 tar[1639]: ./dhcp
Feb 12 21:59:10.545667 tar[1639]: ./static
Feb 12 21:59:10.590403 tar[1639]: ./firewall
Feb 12 21:59:10.665073 tar[1639]: ./macvlan
Feb 12 21:59:10.727420 tar[1639]: ./dummy
Feb 12 21:59:10.794348 tar[1639]: ./bridge
Feb 12 21:59:10.869639 tar[1639]: ./ipvlan
Feb 12 21:59:10.929875 tar[1639]: ./portmap
Feb 12 21:59:10.987482 tar[1639]: ./host-local
Feb 12 21:59:11.051604 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 21:59:11.129052 locksmithd[1690]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 21:59:11.428781 sshd_keygen[1666]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 21:59:11.453173 systemd[1]: Finished sshd-keygen.service.
Feb 12 21:59:11.456199 systemd[1]: Starting issuegen.service...
Feb 12 21:59:11.462464 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 21:59:11.462631 systemd[1]: Finished issuegen.service.
Feb 12 21:59:11.465685 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 21:59:11.474408 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 21:59:11.477935 systemd[1]: Started getty@tty1.service.
Feb 12 21:59:11.480702 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 21:59:11.482139 systemd[1]: Reached target getty.target.
Feb 12 21:59:11.483379 systemd[1]: Reached target multi-user.target.
Feb 12 21:59:11.486284 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 21:59:11.496736 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 21:59:11.496974 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 21:59:11.498665 systemd[1]: Startup finished in 775ms (kernel) + 8.635s (initrd) + 10.021s (userspace) = 19.432s.
Feb 12 21:59:16.667208 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Create new startup processor
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [LongRunningPluginsManager] registered plugins: {}
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing bookkeeping folders
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO removing the completed state files
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing bookkeeping folders for long running plugins
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing healthcheck folders for long running plugins
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing locations for inventory plugin
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing default location for custom inventory
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing default location for file inventory
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Initializing default location for role inventory
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Init the cloudwatchlogs publisher
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:softwareInventory
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:runPowerShellScript
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:runDockerAction
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:runDocument
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:updateSsmAgent
Feb 12 21:59:16.667635 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:configureDocker
Feb 12 21:59:16.668330 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:refreshAssociation
Feb 12 21:59:16.668330 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:configurePackage
Feb 12 21:59:16.668330 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform independent plugin aws:downloadContent
Feb 12 21:59:16.668330 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Successfully loaded platform dependent plugin aws:runShellScript
Feb 12 21:59:16.668330 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Feb 12 21:59:16.668330 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO OS: linux, Arch: amd64
Feb 12 21:59:16.675116 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [OfflineService] Starting document processing engine...
Feb 12 21:59:16.675736 amazon-ssm-agent[1623]: datastore file /var/lib/amazon/ssm/i-0f7378af3ad647ecd/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Feb 12 21:59:16.772357 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [OfflineService] [EngineProcessor] Starting
Feb 12 21:59:16.866890 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [OfflineService] [EngineProcessor] Initial processing
Feb 12 21:59:16.962120 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [OfflineService] Starting message polling
Feb 12 21:59:17.057003 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [OfflineService] Starting send replies to MDS
Feb 12 21:59:17.151998 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] Starting document processing engine...
Feb 12 21:59:17.247448 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Feb 12 21:59:17.342765 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Feb 12 21:59:17.438633 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] Starting message polling
Feb 12 21:59:17.534550 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] Starting send replies to MDS
Feb 12 21:59:17.630637 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [instanceID=i-0f7378af3ad647ecd] Starting association polling
Feb 12 21:59:17.726951 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Feb 12 21:59:17.823244 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] [Association] Launching response handler
Feb 12 21:59:17.920157 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Feb 12 21:59:18.017011 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 12 21:59:18.041607 systemd[1]: Created slice system-sshd.slice.
Feb 12 21:59:18.043561 systemd[1]: Started sshd@0-172.31.17.60:22-139.178.89.65:40938.service.
Feb 12 21:59:18.113997 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 12 21:59:18.211102 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] Starting session document processing engine...
Feb 12 21:59:18.222250 sshd[1838]: Accepted publickey for core from 139.178.89.65 port 40938 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:18.224782 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:18.237521 systemd[1]: Created slice user-500.slice.
Feb 12 21:59:18.239237 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 21:59:18.243849 systemd-logind[1635]: New session 1 of user core.
Feb 12 21:59:18.262366 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 21:59:18.264464 systemd[1]: Starting user@500.service...
Feb 12 21:59:18.269181 (systemd)[1841]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:18.308661 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [LongRunningPluginsManager] starting long running plugin manager
Feb 12 21:59:18.365699 systemd[1841]: Queued start job for default target default.target.
Feb 12 21:59:18.366587 systemd[1841]: Reached target paths.target.
Feb 12 21:59:18.366621 systemd[1841]: Reached target sockets.target.
Feb 12 21:59:18.366640 systemd[1841]: Reached target timers.target.
Feb 12 21:59:18.366657 systemd[1841]: Reached target basic.target.
Feb 12 21:59:18.366713 systemd[1841]: Reached target default.target.
Feb 12 21:59:18.366752 systemd[1841]: Startup finished in 89ms.
Feb 12 21:59:18.367201 systemd[1]: Started user@500.service.
Feb 12 21:59:18.368715 systemd[1]: Started session-1.scope.
Feb 12 21:59:18.406255 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Feb 12 21:59:18.503953 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 12 21:59:18.519834 systemd[1]: Started sshd@1-172.31.17.60:22-139.178.89.65:40950.service.
Feb 12 21:59:18.601841 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] [EngineProcessor] Starting
Feb 12 21:59:18.689439 sshd[1850]: Accepted publickey for core from 139.178.89.65 port 40950 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:18.690753 sshd[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:18.696666 systemd[1]: Started session-2.scope.
Feb 12 21:59:18.698013 systemd-logind[1635]: New session 2 of user core.
Feb 12 21:59:18.699841 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Feb 12 21:59:18.798176 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0f7378af3ad647ecd, requestId: f12a2f01-3f02-4859-8272-5cd942eccb93
Feb 12 21:59:18.828970 sshd[1850]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:18.832337 systemd[1]: sshd@1-172.31.17.60:22-139.178.89.65:40950.service: Deactivated successfully.
Feb 12 21:59:18.833691 systemd-logind[1635]: Session 2 logged out. Waiting for processes to exit.
Feb 12 21:59:18.833799 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 21:59:18.835032 systemd-logind[1635]: Removed session 2.
Feb 12 21:59:18.854719 systemd[1]: Started sshd@2-172.31.17.60:22-139.178.89.65:40954.service.
Feb 12 21:59:18.896801 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] listening reply.
Feb 12 21:59:18.995533 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Feb 12 21:59:19.022033 sshd[1856]: Accepted publickey for core from 139.178.89.65 port 40954 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:19.023676 sshd[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:19.029761 systemd[1]: Started session-3.scope.
Feb 12 21:59:19.030573 systemd-logind[1635]: New session 3 of user core.
Feb 12 21:59:19.094431 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [StartupProcessor] Executing startup processor tasks
Feb 12 21:59:19.150820 sshd[1856]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:19.153861 systemd[1]: sshd@2-172.31.17.60:22-139.178.89.65:40954.service: Deactivated successfully.
Feb 12 21:59:19.154734 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 21:59:19.155465 systemd-logind[1635]: Session 3 logged out. Waiting for processes to exit.
Feb 12 21:59:19.163164 systemd-logind[1635]: Removed session 3.
Feb 12 21:59:19.175773 systemd[1]: Started sshd@3-172.31.17.60:22-139.178.89.65:40956.service.
Feb 12 21:59:19.193450 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 12 21:59:19.292812 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 12 21:59:19.337982 sshd[1862]: Accepted publickey for core from 139.178.89.65 port 40956 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:19.340190 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:19.346321 systemd[1]: Started session-4.scope.
Feb 12 21:59:19.347258 systemd-logind[1635]: New session 4 of user core.
Feb 12 21:59:19.392380 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 12 21:59:19.473006 sshd[1862]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:19.476585 systemd[1]: sshd@3-172.31.17.60:22-139.178.89.65:40956.service: Deactivated successfully.
Feb 12 21:59:19.477493 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 21:59:19.478605 systemd-logind[1635]: Session 4 logged out. Waiting for processes to exit.
Feb 12 21:59:19.479667 systemd-logind[1635]: Removed session 4.
Feb 12 21:59:19.492009 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0f7378af3ad647ecd?role=subscribe&stream=input
Feb 12 21:59:19.498651 systemd[1]: Started sshd@4-172.31.17.60:22-139.178.89.65:40970.service.
Feb 12 21:59:19.591973 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0f7378af3ad647ecd?role=subscribe&stream=input
Feb 12 21:59:19.666778 sshd[1868]: Accepted publickey for core from 139.178.89.65 port 40970 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:19.668499 sshd[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:19.674807 systemd[1]: Started session-5.scope.
Feb 12 21:59:19.675537 systemd-logind[1635]: New session 5 of user core.
Feb 12 21:59:19.692024 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 12 21:59:19.792056 amazon-ssm-agent[1623]: 2024-02-12 21:59:16 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 12 21:59:19.809413 sudo[1872]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 21:59:19.809707 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 21:59:19.892325 amazon-ssm-agent[1623]: 2024-02-12 21:59:19 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 12 21:59:20.399722 systemd[1]: Reloading.
Feb 12 21:59:20.490724 /usr/lib/systemd/system-generators/torcx-generator[1901]: time="2024-02-12T21:59:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:59:20.500961 /usr/lib/systemd/system-generators/torcx-generator[1901]: time="2024-02-12T21:59:20Z" level=info msg="torcx already run"
Feb 12 21:59:20.621901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:59:20.621925 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:59:20.648187 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:59:20.773111 systemd[1]: Started kubelet.service.
Feb 12 21:59:20.795010 systemd[1]: Starting coreos-metadata.service...
Feb 12 21:59:20.894405 kubelet[1953]: E0212 21:59:20.894272    1953 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 12 21:59:20.897406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 21:59:20.897578 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 21:59:20.965442 coreos-metadata[1960]: Feb 12 21:59:20.965 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 12 21:59:20.966226 coreos-metadata[1960]: Feb 12 21:59:20.966 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Feb 12 21:59:20.966896 coreos-metadata[1960]: Feb 12 21:59:20.966 INFO Fetch successful
Feb 12 21:59:20.966896 coreos-metadata[1960]: Feb 12 21:59:20.966 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Feb 12 21:59:20.967709 coreos-metadata[1960]: Feb 12 21:59:20.967 INFO Fetch successful
Feb 12 21:59:20.967772 coreos-metadata[1960]: Feb 12 21:59:20.967 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Feb 12 21:59:20.968707 coreos-metadata[1960]: Feb 12 21:59:20.968 INFO Fetch successful
Feb 12 21:59:20.968707 coreos-metadata[1960]: Feb 12 21:59:20.968 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Feb 12 21:59:20.969485 coreos-metadata[1960]: Feb 12 21:59:20.969 INFO Fetch successful
Feb 12 21:59:20.969688 coreos-metadata[1960]: Feb 12 21:59:20.969 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Feb 12 21:59:20.970391 coreos-metadata[1960]: Feb 12 21:59:20.970 INFO Fetch successful
Feb 12 21:59:20.970473 coreos-metadata[1960]: Feb 12 21:59:20.970 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Feb 12 21:59:20.971261 coreos-metadata[1960]: Feb 12 21:59:20.971 INFO Fetch successful
Feb 12 21:59:20.971261 coreos-metadata[1960]: Feb 12 21:59:20.971 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Feb 12 21:59:20.972062 coreos-metadata[1960]: Feb 12 21:59:20.972 INFO Fetch successful
Feb 12 21:59:20.972062 coreos-metadata[1960]: Feb 12 21:59:20.972 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Feb 12 21:59:20.973058 coreos-metadata[1960]: Feb 12 21:59:20.972 INFO Fetch successful
Feb 12 21:59:20.982977 systemd[1]: Finished coreos-metadata.service.
Feb 12 21:59:21.342026 systemd[1]: Stopped kubelet.service.
Feb 12 21:59:21.362042 systemd[1]: Reloading.
Feb 12 21:59:21.466518 /usr/lib/systemd/system-generators/torcx-generator[2021]: time="2024-02-12T21:59:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:59:21.466977 /usr/lib/systemd/system-generators/torcx-generator[2021]: time="2024-02-12T21:59:21Z" level=info msg="torcx already run"
Feb 12 21:59:21.573519 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:59:21.573545 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:59:21.594759 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:59:21.720252 systemd[1]: Started kubelet.service.
Feb 12 21:59:21.786682 kubelet[2071]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:59:21.786682 kubelet[2071]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:59:21.786682 kubelet[2071]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:59:21.787290 kubelet[2071]: I0212 21:59:21.786740    2071 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 21:59:22.206314 kubelet[2071]: I0212 21:59:22.206277    2071 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 12 21:59:22.206314 kubelet[2071]: I0212 21:59:22.206308    2071 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 21:59:22.206616 kubelet[2071]: I0212 21:59:22.206594    2071 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 12 21:59:22.209315 kubelet[2071]: I0212 21:59:22.209287    2071 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:59:22.217858 kubelet[2071]: I0212 21:59:22.217829    2071 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 12 21:59:22.218149 kubelet[2071]: I0212 21:59:22.218135    2071 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 21:59:22.218427 kubelet[2071]: I0212 21:59:22.218407    2071 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 12 21:59:22.218709 kubelet[2071]: I0212 21:59:22.218437    2071 topology_manager.go:138] "Creating topology manager with none policy"
Feb 12 21:59:22.218709 kubelet[2071]: I0212 21:59:22.218449    2071 container_manager_linux.go:301] "Creating device plugin manager"
Feb 12 21:59:22.218709 kubelet[2071]: I0212 21:59:22.218703    2071 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:59:22.218990 kubelet[2071]: I0212 21:59:22.218963    2071 kubelet.go:393] "Attempting to sync node with API server"
Feb 12 21:59:22.218990 kubelet[2071]: I0212 21:59:22.218988    2071 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 21:59:22.219064 kubelet[2071]: I0212 21:59:22.219020    2071 kubelet.go:309] "Adding apiserver pod source"
Feb 12 21:59:22.219064 kubelet[2071]: I0212 21:59:22.219043    2071 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 21:59:22.220062 kubelet[2071]: E0212 21:59:22.220045    2071 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:22.220132 kubelet[2071]: E0212 21:59:22.220101    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:22.221269 kubelet[2071]: I0212 21:59:22.221251    2071 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 21:59:22.222126 kubelet[2071]: W0212 21:59:22.222107    2071 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 21:59:22.223397 kubelet[2071]: I0212 21:59:22.223378    2071 server.go:1232] "Started kubelet"
Feb 12 21:59:22.225117 kubelet[2071]: I0212 21:59:22.225101    2071 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 21:59:22.225773 kubelet[2071]: I0212 21:59:22.225753    2071 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 12 21:59:22.226377 kubelet[2071]: I0212 21:59:22.226362    2071 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 21:59:22.228231 kubelet[2071]: I0212 21:59:22.228214    2071 server.go:462] "Adding debug handlers to kubelet server"
Feb 12 21:59:22.230297 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 21:59:22.230450 kubelet[2071]: I0212 21:59:22.230432    2071 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 21:59:22.234179 kubelet[2071]: E0212 21:59:22.234159    2071 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 21:59:22.234395 kubelet[2071]: E0212 21:59:22.234379    2071 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 21:59:22.244518 kubelet[2071]: I0212 21:59:22.244485    2071 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 12 21:59:22.245247 kubelet[2071]: I0212 21:59:22.245230    2071 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 21:59:22.246570 kubelet[2071]: I0212 21:59:22.246556    2071 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 12 21:59:22.286518 kubelet[2071]: I0212 21:59:22.286491    2071 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 21:59:22.286686 kubelet[2071]: I0212 21:59:22.286677    2071 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 21:59:22.286759 kubelet[2071]: I0212 21:59:22.286752    2071 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:59:22.289738 kubelet[2071]: I0212 21:59:22.289717    2071 policy_none.go:49] "None policy: Start"
Feb 12 21:59:22.291395 kubelet[2071]: I0212 21:59:22.291375    2071 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 21:59:22.291561 kubelet[2071]: I0212 21:59:22.291549    2071 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 21:59:22.298836 systemd[1]: Created slice kubepods.slice.
Feb 12 21:59:22.303526 kubelet[2071]: E0212 21:59:22.303407    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763c8e9b56", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 223336278, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 223336278, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.303999 kubelet[2071]: W0212 21:59:22.303984    2071 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.17.60" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:59:22.304150 kubelet[2071]: E0212 21:59:22.304140    2071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.17.60" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:59:22.304259 kubelet[2071]: W0212 21:59:22.304250    2071 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:59:22.304332 kubelet[2071]: E0212 21:59:22.304325    2071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:59:22.304507 kubelet[2071]: E0212 21:59:22.304428    2071 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.17.60\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 12 21:59:22.304621 kubelet[2071]: W0212 21:59:22.304611    2071 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:59:22.304741 kubelet[2071]: E0212 21:59:22.304734    2071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:59:22.307722 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 21:59:22.309544 kubelet[2071]: E0212 21:59:22.309408    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763d36919b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 234343835, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 234343835, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.313696 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 21:59:22.320800 kubelet[2071]: I0212 21:59:22.320766    2071 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 21:59:22.321173 kubelet[2071]: I0212 21:59:22.321070    2071 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 21:59:22.324427 kubelet[2071]: E0212 21:59:22.324361    2071 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.60\" not found"
Feb 12 21:59:22.327443 kubelet[2071]: E0212 21:59:22.327344    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf08bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.17.60 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276841659, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276841659, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.329243 kubelet[2071]: E0212 21:59:22.329103    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf5a4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.17.60 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276862539, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276862539, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.339982 kubelet[2071]: E0212 21:59:22.339833    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf6b30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.17.60 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276866864, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276866864, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.346575 kubelet[2071]: E0212 21:59:22.346444    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c7642805e87", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 323066503, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 323066503, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.349465 kubelet[2071]: I0212 21:59:22.349431    2071 kubelet_node_status.go:70] "Attempting to register node" node="172.31.17.60"
Feb 12 21:59:22.353967 kubelet[2071]: E0212 21:59:22.353942    2071 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.17.60"
Feb 12 21:59:22.354198 kubelet[2071]: E0212 21:59:22.354098    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf08bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.17.60 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276841659, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 347861214, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf08bb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.359367 kubelet[2071]: E0212 21:59:22.359271    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf5a4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.17.60 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276862539, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 347866137, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf5a4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.363773 kubelet[2071]: E0212 21:59:22.363158    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf6b30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.17.60 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276866864, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 347868774, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf6b30" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.392634 kubelet[2071]: I0212 21:59:22.392594    2071 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 12 21:59:22.394334 kubelet[2071]: I0212 21:59:22.394283    2071 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 12 21:59:22.394334 kubelet[2071]: I0212 21:59:22.394326    2071 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 12 21:59:22.394498 kubelet[2071]: I0212 21:59:22.394349    2071 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 12 21:59:22.394498 kubelet[2071]: E0212 21:59:22.394418    2071 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 21:59:22.404211 kubelet[2071]: W0212 21:59:22.404136    2071 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:59:22.404211 kubelet[2071]: E0212 21:59:22.404229    2071 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:59:22.464053 amazon-ssm-agent[1623]: 2024-02-12 21:59:22 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 12 21:59:22.511662 kubelet[2071]: E0212 21:59:22.511612    2071 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.17.60\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 12 21:59:22.555124 kubelet[2071]: I0212 21:59:22.555099    2071 kubelet_node_status.go:70] "Attempting to register node" node="172.31.17.60"
Feb 12 21:59:22.557933 kubelet[2071]: E0212 21:59:22.557907    2071 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.17.60"
Feb 12 21:59:22.558098 kubelet[2071]: E0212 21:59:22.557926    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf08bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.17.60 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276841659, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 555022735, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf08bb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.559918 kubelet[2071]: E0212 21:59:22.559817    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf5a4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.17.60 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276862539, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 555044000, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf5a4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.561387 kubelet[2071]: E0212 21:59:22.561303    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf6b30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.17.60 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276866864, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 555048040, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf6b30" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.914768 kubelet[2071]: E0212 21:59:22.914736    2071 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.17.60\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Feb 12 21:59:22.959305 kubelet[2071]: I0212 21:59:22.959277    2071 kubelet_node_status.go:70] "Attempting to register node" node="172.31.17.60"
Feb 12 21:59:22.961756 kubelet[2071]: E0212 21:59:22.961728    2071 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.17.60"
Feb 12 21:59:22.961994 kubelet[2071]: E0212 21:59:22.961798    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf08bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.17.60 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276841659, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 959220267, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf08bb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.963838 kubelet[2071]: E0212 21:59:22.963762    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf5a4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.17.60 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276862539, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 959230436, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf5a4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:22.964936 kubelet[2071]: E0212 21:59:22.964844    2071 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.60.17b33c763fbf6b30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.17.60", UID:"172.31.17.60", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.17.60 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 276866864, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 22, 959234337, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'events "172.31.17.60.17b33c763fbf6b30" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:59:23.209686 kubelet[2071]: I0212 21:59:23.209030    2071 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 12 21:59:23.220476 kubelet[2071]: E0212 21:59:23.220350    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:23.654815 kubelet[2071]: E0212 21:59:23.654775    2071 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.17.60" not found
Feb 12 21:59:23.727277 kubelet[2071]: E0212 21:59:23.727165    2071 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.17.60\" not found" node="172.31.17.60"
Feb 12 21:59:23.763114 kubelet[2071]: I0212 21:59:23.763072    2071 kubelet_node_status.go:70] "Attempting to register node" node="172.31.17.60"
Feb 12 21:59:23.771011 kubelet[2071]: I0212 21:59:23.770968    2071 kubelet_node_status.go:73] "Successfully registered node" node="172.31.17.60"
Feb 12 21:59:23.892943 kubelet[2071]: I0212 21:59:23.892908    2071 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 12 21:59:23.893378 env[1642]: time="2024-02-12T21:59:23.893328712Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 21:59:23.893930 kubelet[2071]: I0212 21:59:23.893908    2071 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 12 21:59:24.220807 kubelet[2071]: I0212 21:59:24.220764    2071 apiserver.go:52] "Watching apiserver"
Feb 12 21:59:24.221309 kubelet[2071]: E0212 21:59:24.220781    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:24.224406 kubelet[2071]: I0212 21:59:24.224372    2071 topology_manager.go:215] "Topology Admit Handler" podUID="5ca8068d-0c0c-446b-a378-9376700880de" podNamespace="kube-system" podName="cilium-x98n2"
Feb 12 21:59:24.224544 kubelet[2071]: I0212 21:59:24.224530    2071 topology_manager.go:215] "Topology Admit Handler" podUID="faec34da-cfca-4085-9d9a-22e9f4888f27" podNamespace="kube-system" podName="kube-proxy-4t6wb"
Feb 12 21:59:24.231656 systemd[1]: Created slice kubepods-besteffort-podfaec34da_cfca_4085_9d9a_22e9f4888f27.slice.
Feb 12 21:59:24.241910 sudo[1872]: pam_unix(sudo:session): session closed for user root
Feb 12 21:59:24.246116 kubelet[2071]: I0212 21:59:24.245988    2071 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 21:59:24.247218 systemd[1]: Created slice kubepods-burstable-pod5ca8068d_0c0c_446b_a378_9376700880de.slice.
Feb 12 21:59:24.257795 kubelet[2071]: I0212 21:59:24.257751    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-cgroup\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.257795 kubelet[2071]: I0212 21:59:24.257800    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-etc-cni-netd\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258008 kubelet[2071]: I0212 21:59:24.257828    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca8068d-0c0c-446b-a378-9376700880de-cilium-config-path\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258008 kubelet[2071]: I0212 21:59:24.257854    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-hubble-tls\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258008 kubelet[2071]: I0212 21:59:24.257907    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faec34da-cfca-4085-9d9a-22e9f4888f27-xtables-lock\") pod \"kube-proxy-4t6wb\" (UID: \"faec34da-cfca-4085-9d9a-22e9f4888f27\") " pod="kube-system/kube-proxy-4t6wb"
Feb 12 21:59:24.258008 kubelet[2071]: I0212 21:59:24.257934    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faec34da-cfca-4085-9d9a-22e9f4888f27-lib-modules\") pod \"kube-proxy-4t6wb\" (UID: \"faec34da-cfca-4085-9d9a-22e9f4888f27\") " pod="kube-system/kube-proxy-4t6wb"
Feb 12 21:59:24.258008 kubelet[2071]: I0212 21:59:24.257960    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-hostproc\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258008 kubelet[2071]: I0212 21:59:24.257990    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-bpf-maps\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258271 kubelet[2071]: I0212 21:59:24.258020    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-xtables-lock\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258271 kubelet[2071]: I0212 21:59:24.258069    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ca8068d-0c0c-446b-a378-9376700880de-clustermesh-secrets\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258271 kubelet[2071]: I0212 21:59:24.258103    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdbn\" (UniqueName: \"kubernetes.io/projected/faec34da-cfca-4085-9d9a-22e9f4888f27-kube-api-access-kxdbn\") pod \"kube-proxy-4t6wb\" (UID: \"faec34da-cfca-4085-9d9a-22e9f4888f27\") " pod="kube-system/kube-proxy-4t6wb"
Feb 12 21:59:24.258271 kubelet[2071]: I0212 21:59:24.258132    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-run\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258271 kubelet[2071]: I0212 21:59:24.258162    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-lib-modules\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258660 kubelet[2071]: I0212 21:59:24.258194    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-net\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258660 kubelet[2071]: I0212 21:59:24.258237    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-kernel\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258660 kubelet[2071]: I0212 21:59:24.258269    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw5q8\" (UniqueName: \"kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-kube-api-access-hw5q8\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.258660 kubelet[2071]: I0212 21:59:24.258301    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/faec34da-cfca-4085-9d9a-22e9f4888f27-kube-proxy\") pod \"kube-proxy-4t6wb\" (UID: \"faec34da-cfca-4085-9d9a-22e9f4888f27\") " pod="kube-system/kube-proxy-4t6wb"
Feb 12 21:59:24.258660 kubelet[2071]: I0212 21:59:24.258330    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cni-path\") pod \"cilium-x98n2\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") " pod="kube-system/cilium-x98n2"
Feb 12 21:59:24.266249 sshd[1868]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:24.270568 systemd[1]: sshd@4-172.31.17.60:22-139.178.89.65:40970.service: Deactivated successfully.
Feb 12 21:59:24.271542 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 21:59:24.272444 systemd-logind[1635]: Session 5 logged out. Waiting for processes to exit.
Feb 12 21:59:24.274023 systemd-logind[1635]: Removed session 5.
Feb 12 21:59:24.545178 env[1642]: time="2024-02-12T21:59:24.545052783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4t6wb,Uid:faec34da-cfca-4085-9d9a-22e9f4888f27,Namespace:kube-system,Attempt:0,}"
Feb 12 21:59:24.560541 env[1642]: time="2024-02-12T21:59:24.560492430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x98n2,Uid:5ca8068d-0c0c-446b-a378-9376700880de,Namespace:kube-system,Attempt:0,}"
Feb 12 21:59:25.056379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998186833.mount: Deactivated successfully.
Feb 12 21:59:25.066470 env[1642]: time="2024-02-12T21:59:25.066421891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.070614 env[1642]: time="2024-02-12T21:59:25.070562530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.072495 env[1642]: time="2024-02-12T21:59:25.072450774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.079277 env[1642]: time="2024-02-12T21:59:25.079222024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.081343 env[1642]: time="2024-02-12T21:59:25.081296773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.084048 env[1642]: time="2024-02-12T21:59:25.084009936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.085041 env[1642]: time="2024-02-12T21:59:25.085002199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.086282 env[1642]: time="2024-02-12T21:59:25.086249368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:25.111800 env[1642]: time="2024-02-12T21:59:25.111724382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:25.112726 env[1642]: time="2024-02-12T21:59:25.111775350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:25.112726 env[1642]: time="2024-02-12T21:59:25.111790069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:25.112726 env[1642]: time="2024-02-12T21:59:25.111950374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e36fa6e1c4d24cc65191aafd67dffc5727d0e22c2bf2f0fc6be38584aeb142a4 pid=2121 runtime=io.containerd.runc.v2
Feb 12 21:59:25.125590 env[1642]: time="2024-02-12T21:59:25.125488601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:25.125590 env[1642]: time="2024-02-12T21:59:25.125545522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:25.125590 env[1642]: time="2024-02-12T21:59:25.125563049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:25.126010 env[1642]: time="2024-02-12T21:59:25.125949718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e pid=2140 runtime=io.containerd.runc.v2
Feb 12 21:59:25.135330 systemd[1]: Started cri-containerd-e36fa6e1c4d24cc65191aafd67dffc5727d0e22c2bf2f0fc6be38584aeb142a4.scope.
Feb 12 21:59:25.153957 systemd[1]: Started cri-containerd-2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e.scope.
Feb 12 21:59:25.199536 env[1642]: time="2024-02-12T21:59:25.199488273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4t6wb,Uid:faec34da-cfca-4085-9d9a-22e9f4888f27,Namespace:kube-system,Attempt:0,} returns sandbox id \"e36fa6e1c4d24cc65191aafd67dffc5727d0e22c2bf2f0fc6be38584aeb142a4\""
Feb 12 21:59:25.202654 env[1642]: time="2024-02-12T21:59:25.202608733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 12 21:59:25.210546 env[1642]: time="2024-02-12T21:59:25.208991596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x98n2,Uid:5ca8068d-0c0c-446b-a378-9376700880de,Namespace:kube-system,Attempt:0,} returns sandbox id \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\""
Feb 12 21:59:25.221949 kubelet[2071]: E0212 21:59:25.221919    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:26.223238 kubelet[2071]: E0212 21:59:26.223012    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:26.336995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673667225.mount: Deactivated successfully.
Feb 12 21:59:27.063592 env[1642]: time="2024-02-12T21:59:27.063537309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:27.066446 env[1642]: time="2024-02-12T21:59:27.066401248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:27.072801 env[1642]: time="2024-02-12T21:59:27.072715052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:27.076158 env[1642]: time="2024-02-12T21:59:27.076114646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:27.078053 env[1642]: time="2024-02-12T21:59:27.077827093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb 12 21:59:27.082547 env[1642]: time="2024-02-12T21:59:27.082495885Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 21:59:27.085829 env[1642]: time="2024-02-12T21:59:27.085782381Z" level=info msg="CreateContainer within sandbox \"e36fa6e1c4d24cc65191aafd67dffc5727d0e22c2bf2f0fc6be38584aeb142a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 21:59:27.105086 env[1642]: time="2024-02-12T21:59:27.105031046Z" level=info msg="CreateContainer within sandbox \"e36fa6e1c4d24cc65191aafd67dffc5727d0e22c2bf2f0fc6be38584aeb142a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28aa69693309ab8813964dff92e4324dafe2819e9dee8b012454396c3af40c00\""
Feb 12 21:59:27.106052 env[1642]: time="2024-02-12T21:59:27.106017551Z" level=info msg="StartContainer for \"28aa69693309ab8813964dff92e4324dafe2819e9dee8b012454396c3af40c00\""
Feb 12 21:59:27.136686 systemd[1]: run-containerd-runc-k8s.io-28aa69693309ab8813964dff92e4324dafe2819e9dee8b012454396c3af40c00-runc.56PnCV.mount: Deactivated successfully.
Feb 12 21:59:27.138663 systemd[1]: Started cri-containerd-28aa69693309ab8813964dff92e4324dafe2819e9dee8b012454396c3af40c00.scope.
Feb 12 21:59:27.184792 env[1642]: time="2024-02-12T21:59:27.184736854Z" level=info msg="StartContainer for \"28aa69693309ab8813964dff92e4324dafe2819e9dee8b012454396c3af40c00\" returns successfully"
Feb 12 21:59:27.223376 kubelet[2071]: E0212 21:59:27.223338    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:27.424447 kubelet[2071]: I0212 21:59:27.424408    2071 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4t6wb" podStartSLOduration=2.546500299 podCreationTimestamp="2024-02-12 21:59:23 +0000 UTC" firstStartedPulling="2024-02-12 21:59:25.201806847 +0000 UTC m=+3.475613205" lastFinishedPulling="2024-02-12 21:59:27.079636192 +0000 UTC m=+5.353442560" observedRunningTime="2024-02-12 21:59:27.424049291 +0000 UTC m=+5.697855670" watchObservedRunningTime="2024-02-12 21:59:27.424329654 +0000 UTC m=+5.698136031"
Feb 12 21:59:28.223751 kubelet[2071]: E0212 21:59:28.223689    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:29.223917 kubelet[2071]: E0212 21:59:29.223853    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:30.225088 kubelet[2071]: E0212 21:59:30.225015    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:31.225763 kubelet[2071]: E0212 21:59:31.225722    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:32.226083 kubelet[2071]: E0212 21:59:32.226016    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:33.122371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032579438.mount: Deactivated successfully.
Feb 12 21:59:33.226340 kubelet[2071]: E0212 21:59:33.226284    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:34.226837 kubelet[2071]: E0212 21:59:34.226801    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:35.227022 kubelet[2071]: E0212 21:59:35.226984    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:36.227996 kubelet[2071]: E0212 21:59:36.227904    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:36.673885 env[1642]: time="2024-02-12T21:59:36.673783738Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:36.676427 env[1642]: time="2024-02-12T21:59:36.676383575Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:36.678695 env[1642]: time="2024-02-12T21:59:36.678657092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:36.679251 env[1642]: time="2024-02-12T21:59:36.679211025Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 21:59:36.683063 env[1642]: time="2024-02-12T21:59:36.683025359Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 21:59:36.701710 env[1642]: time="2024-02-12T21:59:36.701650757Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\""
Feb 12 21:59:36.704022 env[1642]: time="2024-02-12T21:59:36.703978750Z" level=info msg="StartContainer for \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\""
Feb 12 21:59:36.728546 systemd[1]: Started cri-containerd-9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c.scope.
Feb 12 21:59:36.737309 systemd[1]: run-containerd-runc-k8s.io-9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c-runc.0TgLa2.mount: Deactivated successfully.
Feb 12 21:59:36.768079 env[1642]: time="2024-02-12T21:59:36.768001138Z" level=info msg="StartContainer for \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\" returns successfully"
Feb 12 21:59:36.776220 systemd[1]: cri-containerd-9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c.scope: Deactivated successfully.
Feb 12 21:59:36.979974 env[1642]: time="2024-02-12T21:59:36.979833285Z" level=info msg="shim disconnected" id=9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c
Feb 12 21:59:36.979974 env[1642]: time="2024-02-12T21:59:36.979903817Z" level=warning msg="cleaning up after shim disconnected" id=9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c namespace=k8s.io
Feb 12 21:59:36.979974 env[1642]: time="2024-02-12T21:59:36.979917821Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:36.989628 env[1642]: time="2024-02-12T21:59:36.989580514Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2407 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:37.229160 kubelet[2071]: E0212 21:59:37.229114    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:37.450986 env[1642]: time="2024-02-12T21:59:37.450942884Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 21:59:37.467091 env[1642]: time="2024-02-12T21:59:37.467038637Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\""
Feb 12 21:59:37.467825 env[1642]: time="2024-02-12T21:59:37.467788984Z" level=info msg="StartContainer for \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\""
Feb 12 21:59:37.489954 systemd[1]: Started cri-containerd-975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5.scope.
Feb 12 21:59:37.532765 env[1642]: time="2024-02-12T21:59:37.532706223Z" level=info msg="StartContainer for \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\" returns successfully"
Feb 12 21:59:37.542832 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 21:59:37.543181 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 21:59:37.543859 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 21:59:37.550086 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:59:37.550465 systemd[1]: cri-containerd-975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5.scope: Deactivated successfully.
Feb 12 21:59:37.559735 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:59:37.590558 env[1642]: time="2024-02-12T21:59:37.590494770Z" level=info msg="shim disconnected" id=975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5
Feb 12 21:59:37.590558 env[1642]: time="2024-02-12T21:59:37.590551877Z" level=warning msg="cleaning up after shim disconnected" id=975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5 namespace=k8s.io
Feb 12 21:59:37.590558 env[1642]: time="2024-02-12T21:59:37.590564959Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:37.600465 env[1642]: time="2024-02-12T21:59:37.600346784Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2468 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:37.692708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c-rootfs.mount: Deactivated successfully.
Feb 12 21:59:38.232257 kubelet[2071]: E0212 21:59:38.232183    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:38.454772 env[1642]: time="2024-02-12T21:59:38.454731075Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 21:59:38.475348 env[1642]: time="2024-02-12T21:59:38.475303165Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\""
Feb 12 21:59:38.476177 env[1642]: time="2024-02-12T21:59:38.476143072Z" level=info msg="StartContainer for \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\""
Feb 12 21:59:38.514043 systemd[1]: Started cri-containerd-d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc.scope.
Feb 12 21:59:38.557537 systemd[1]: cri-containerd-d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc.scope: Deactivated successfully.
Feb 12 21:59:38.562154 env[1642]: time="2024-02-12T21:59:38.562106485Z" level=info msg="StartContainer for \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\" returns successfully"
Feb 12 21:59:38.597001 env[1642]: time="2024-02-12T21:59:38.596951958Z" level=info msg="shim disconnected" id=d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc
Feb 12 21:59:38.597001 env[1642]: time="2024-02-12T21:59:38.596997535Z" level=warning msg="cleaning up after shim disconnected" id=d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc namespace=k8s.io
Feb 12 21:59:38.597541 env[1642]: time="2024-02-12T21:59:38.597012751Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:38.607459 env[1642]: time="2024-02-12T21:59:38.607327316Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2528 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:38.692395 systemd[1]: run-containerd-runc-k8s.io-d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc-runc.3MLQRH.mount: Deactivated successfully.
Feb 12 21:59:38.692525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc-rootfs.mount: Deactivated successfully.
Feb 12 21:59:39.232742 kubelet[2071]: E0212 21:59:39.232702    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:39.471647 env[1642]: time="2024-02-12T21:59:39.471604308Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 21:59:39.488545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500689259.mount: Deactivated successfully.
Feb 12 21:59:39.496540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157824444.mount: Deactivated successfully.
Feb 12 21:59:39.504164 env[1642]: time="2024-02-12T21:59:39.504118734Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\""
Feb 12 21:59:39.504893 env[1642]: time="2024-02-12T21:59:39.504844389Z" level=info msg="StartContainer for \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\""
Feb 12 21:59:39.540445 systemd[1]: Started cri-containerd-e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8.scope.
Feb 12 21:59:39.563837 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 12 21:59:39.591904 systemd[1]: cri-containerd-e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8.scope: Deactivated successfully.
Feb 12 21:59:39.596057 env[1642]: time="2024-02-12T21:59:39.595960296Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ca8068d_0c0c_446b_a378_9376700880de.slice/cri-containerd-e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8.scope/memory.events\": no such file or directory"
Feb 12 21:59:39.597386 env[1642]: time="2024-02-12T21:59:39.597357414Z" level=info msg="StartContainer for \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\" returns successfully"
Feb 12 21:59:39.639863 env[1642]: time="2024-02-12T21:59:39.639797727Z" level=info msg="shim disconnected" id=e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8
Feb 12 21:59:39.639863 env[1642]: time="2024-02-12T21:59:39.639854654Z" level=warning msg="cleaning up after shim disconnected" id=e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8 namespace=k8s.io
Feb 12 21:59:39.639863 env[1642]: time="2024-02-12T21:59:39.639868348Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:39.650434 env[1642]: time="2024-02-12T21:59:39.650385457Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2588 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:40.233519 kubelet[2071]: E0212 21:59:40.233481    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:40.467537 env[1642]: time="2024-02-12T21:59:40.467498149Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 21:59:40.504854 env[1642]: time="2024-02-12T21:59:40.504732052Z" level=info msg="CreateContainer within sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\""
Feb 12 21:59:40.506463 env[1642]: time="2024-02-12T21:59:40.506202895Z" level=info msg="StartContainer for \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\""
Feb 12 21:59:40.542103 systemd[1]: Started cri-containerd-5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67.scope.
Feb 12 21:59:40.610914 env[1642]: time="2024-02-12T21:59:40.610605827Z" level=info msg="StartContainer for \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\" returns successfully"
Feb 12 21:59:40.692829 systemd[1]: run-containerd-runc-k8s.io-5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67-runc.vCo3Ez.mount: Deactivated successfully.
Feb 12 21:59:40.816241 kubelet[2071]: I0212 21:59:40.816109    2071 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 21:59:41.219167 kernel: Initializing XFRM netlink socket
Feb 12 21:59:41.234268 kubelet[2071]: E0212 21:59:41.234232    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:41.490598 kubelet[2071]: I0212 21:59:41.490472    2071 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x98n2" podStartSLOduration=7.02405297 podCreationTimestamp="2024-02-12 21:59:23 +0000 UTC" firstStartedPulling="2024-02-12 21:59:25.213171453 +0000 UTC m=+3.486977820" lastFinishedPulling="2024-02-12 21:59:36.679531306 +0000 UTC m=+14.953337676" observedRunningTime="2024-02-12 21:59:41.490405413 +0000 UTC m=+19.764211790" watchObservedRunningTime="2024-02-12 21:59:41.490412826 +0000 UTC m=+19.764219197"
Feb 12 21:59:42.219751 kubelet[2071]: E0212 21:59:42.219706    2071 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:42.235005 kubelet[2071]: E0212 21:59:42.234954    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:42.855066 kubelet[2071]: I0212 21:59:42.855022    2071 topology_manager.go:215] "Topology Admit Handler" podUID="117269c4-bd98-42ac-923f-9df7215b5d96" podNamespace="default" podName="nginx-deployment-6d5f899847-gr85z"
Feb 12 21:59:42.862084 systemd[1]: Created slice kubepods-besteffort-pod117269c4_bd98_42ac_923f_9df7215b5d96.slice.
Feb 12 21:59:42.905478 kubelet[2071]: I0212 21:59:42.905423    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6lz\" (UniqueName: \"kubernetes.io/projected/117269c4-bd98-42ac-923f-9df7215b5d96-kube-api-access-2v6lz\") pod \"nginx-deployment-6d5f899847-gr85z\" (UID: \"117269c4-bd98-42ac-923f-9df7215b5d96\") " pod="default/nginx-deployment-6d5f899847-gr85z"
Feb 12 21:59:42.952395 systemd-networkd[1456]: cilium_host: Link UP
Feb 12 21:59:42.952551 systemd-networkd[1456]: cilium_net: Link UP
Feb 12 21:59:42.955374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 12 21:59:42.955493 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 21:59:42.958314 systemd-networkd[1456]: cilium_net: Gained carrier
Feb 12 21:59:42.959849 (udev-worker)[2734]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:42.961583 systemd-networkd[1456]: cilium_host: Gained carrier
Feb 12 21:59:42.965152 (udev-worker)[2733]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:43.159822 systemd-networkd[1456]: cilium_host: Gained IPv6LL
Feb 12 21:59:43.167354 env[1642]: time="2024-02-12T21:59:43.167306758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-gr85z,Uid:117269c4-bd98-42ac-923f-9df7215b5d96,Namespace:default,Attempt:0,}"
Feb 12 21:59:43.185091 systemd-networkd[1456]: cilium_vxlan: Link UP
Feb 12 21:59:43.185148 systemd-networkd[1456]: cilium_vxlan: Gained carrier
Feb 12 21:59:43.242024 kubelet[2071]: E0212 21:59:43.241987    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:43.500916 kernel: NET: Registered PF_ALG protocol family
Feb 12 21:59:43.529339 systemd-networkd[1456]: cilium_net: Gained IPv6LL
Feb 12 21:59:44.243372 kubelet[2071]: E0212 21:59:44.243334    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:44.349685 (udev-worker)[2754]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:44.352037 systemd-networkd[1456]: lxc_health: Link UP
Feb 12 21:59:44.361089 systemd-networkd[1456]: lxc_health: Gained carrier
Feb 12 21:59:44.361901 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 21:59:44.765340 systemd-networkd[1456]: lxc5d4ca287577a: Link UP
Feb 12 21:59:44.787485 kernel: eth0: renamed from tmped20e
Feb 12 21:59:44.792961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5d4ca287577a: link becomes ready
Feb 12 21:59:44.792411 systemd-networkd[1456]: lxc5d4ca287577a: Gained carrier
Feb 12 21:59:45.129095 systemd-networkd[1456]: cilium_vxlan: Gained IPv6LL
Feb 12 21:59:45.244865 kubelet[2071]: E0212 21:59:45.244783    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:45.705113 systemd-networkd[1456]: lxc_health: Gained IPv6LL
Feb 12 21:59:46.245345 kubelet[2071]: E0212 21:59:46.245305    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:46.362018 systemd-networkd[1456]: lxc5d4ca287577a: Gained IPv6LL
Feb 12 21:59:47.246322 kubelet[2071]: E0212 21:59:47.246276    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:48.247161 kubelet[2071]: E0212 21:59:48.247115    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:48.934025 kubelet[2071]: I0212 21:59:48.933987    2071 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 12 21:59:49.248363 kubelet[2071]: E0212 21:59:49.248246    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:50.103532 env[1642]: time="2024-02-12T21:59:50.103432645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:50.104057 env[1642]: time="2024-02-12T21:59:50.103501019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:50.104057 env[1642]: time="2024-02-12T21:59:50.103517053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:50.104057 env[1642]: time="2024-02-12T21:59:50.103767519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed20eaef8948edc5a48edf4aa201cdc8edcdcc04978849a31c4e728e7fe09565 pid=3111 runtime=io.containerd.runc.v2
Feb 12 21:59:50.136667 systemd[1]: run-containerd-runc-k8s.io-ed20eaef8948edc5a48edf4aa201cdc8edcdcc04978849a31c4e728e7fe09565-runc.FJQiab.mount: Deactivated successfully.
Feb 12 21:59:50.141131 systemd[1]: Started cri-containerd-ed20eaef8948edc5a48edf4aa201cdc8edcdcc04978849a31c4e728e7fe09565.scope.
Feb 12 21:59:50.205189 env[1642]: time="2024-02-12T21:59:50.205138335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-gr85z,Uid:117269c4-bd98-42ac-923f-9df7215b5d96,Namespace:default,Attempt:0,} returns sandbox id \"ed20eaef8948edc5a48edf4aa201cdc8edcdcc04978849a31c4e728e7fe09565\""
Feb 12 21:59:50.207127 env[1642]: time="2024-02-12T21:59:50.207088894Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 21:59:50.249443 kubelet[2071]: E0212 21:59:50.249405    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:51.250526 kubelet[2071]: E0212 21:59:51.250478    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:52.250710 kubelet[2071]: E0212 21:59:52.250677    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:52.477698 amazon-ssm-agent[1623]: 2024-02-12 21:59:52 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Feb 12 21:59:53.251000 kubelet[2071]: E0212 21:59:53.250948    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:54.069985 update_engine[1636]: I0212 21:59:54.069940  1636 update_attempter.cc:509] Updating boot flags...
Feb 12 21:59:54.251435 kubelet[2071]: E0212 21:59:54.251399    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:55.251919 kubelet[2071]: E0212 21:59:55.251855    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:55.453137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516258017.mount: Deactivated successfully.
Feb 12 21:59:56.252077 kubelet[2071]: E0212 21:59:56.252038    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:56.718180 env[1642]: time="2024-02-12T21:59:56.718119961Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:56.721022 env[1642]: time="2024-02-12T21:59:56.720980331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:56.725814 env[1642]: time="2024-02-12T21:59:56.725771207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:56.730098 env[1642]: time="2024-02-12T21:59:56.730043978Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 21:59:56.731493 env[1642]: time="2024-02-12T21:59:56.731460105Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:56.732490 env[1642]: time="2024-02-12T21:59:56.732454419Z" level=info msg="CreateContainer within sandbox \"ed20eaef8948edc5a48edf4aa201cdc8edcdcc04978849a31c4e728e7fe09565\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 12 21:59:56.752171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076880784.mount: Deactivated successfully.
Feb 12 21:59:56.760643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954847102.mount: Deactivated successfully.
Feb 12 21:59:56.765474 env[1642]: time="2024-02-12T21:59:56.765424137Z" level=info msg="CreateContainer within sandbox \"ed20eaef8948edc5a48edf4aa201cdc8edcdcc04978849a31c4e728e7fe09565\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"de678163c6049b7d8696b21587524b1c31b82d083032f0d7fe7f4838715d2e04\""
Feb 12 21:59:56.766377 env[1642]: time="2024-02-12T21:59:56.766343973Z" level=info msg="StartContainer for \"de678163c6049b7d8696b21587524b1c31b82d083032f0d7fe7f4838715d2e04\""
Feb 12 21:59:56.801113 systemd[1]: Started cri-containerd-de678163c6049b7d8696b21587524b1c31b82d083032f0d7fe7f4838715d2e04.scope.
Feb 12 21:59:56.839171 env[1642]: time="2024-02-12T21:59:56.839119955Z" level=info msg="StartContainer for \"de678163c6049b7d8696b21587524b1c31b82d083032f0d7fe7f4838715d2e04\" returns successfully"
Feb 12 21:59:57.252729 kubelet[2071]: E0212 21:59:57.252681    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:57.532093 kubelet[2071]: I0212 21:59:57.531962    2071 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-gr85z" podStartSLOduration=9.008132307 podCreationTimestamp="2024-02-12 21:59:42 +0000 UTC" firstStartedPulling="2024-02-12 21:59:50.206593803 +0000 UTC m=+28.480400157" lastFinishedPulling="2024-02-12 21:59:56.730376882 +0000 UTC m=+35.004183245" observedRunningTime="2024-02-12 21:59:57.53174588 +0000 UTC m=+35.805552258" watchObservedRunningTime="2024-02-12 21:59:57.531915395 +0000 UTC m=+35.805721768"
Feb 12 21:59:58.252900 kubelet[2071]: E0212 21:59:58.252842    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:59:59.253491 kubelet[2071]: E0212 21:59:59.253435    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:00.253719 kubelet[2071]: E0212 22:00:00.253678    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:01.254355 kubelet[2071]: E0212 22:00:01.254299    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:02.219517 kubelet[2071]: E0212 22:00:02.219464    2071 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:02.255031 kubelet[2071]: E0212 22:00:02.254892    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:03.185690 kubelet[2071]: I0212 22:00:03.185650    2071 topology_manager.go:215] "Topology Admit Handler" podUID="ff1839d2-cbaa-4387-b276-d2120d2c0c26" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 12 22:00:03.207772 systemd[1]: Created slice kubepods-besteffort-podff1839d2_cbaa_4387_b276_d2120d2c0c26.slice.
Feb 12 22:00:03.256040 kubelet[2071]: E0212 22:00:03.255970    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:03.370524 kubelet[2071]: I0212 22:00:03.370467    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtm8l\" (UniqueName: \"kubernetes.io/projected/ff1839d2-cbaa-4387-b276-d2120d2c0c26-kube-api-access-qtm8l\") pod \"nfs-server-provisioner-0\" (UID: \"ff1839d2-cbaa-4387-b276-d2120d2c0c26\") " pod="default/nfs-server-provisioner-0"
Feb 12 22:00:03.370524 kubelet[2071]: I0212 22:00:03.370529    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ff1839d2-cbaa-4387-b276-d2120d2c0c26-data\") pod \"nfs-server-provisioner-0\" (UID: \"ff1839d2-cbaa-4387-b276-d2120d2c0c26\") " pod="default/nfs-server-provisioner-0"
Feb 12 22:00:03.519532 env[1642]: time="2024-02-12T22:00:03.519401349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ff1839d2-cbaa-4387-b276-d2120d2c0c26,Namespace:default,Attempt:0,}"
Feb 12 22:00:03.593889 (udev-worker)[3389]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:00:03.595607 (udev-worker)[3388]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:00:03.595703 systemd-networkd[1456]: lxc07ec6456fefb: Link UP
Feb 12 22:00:03.608164 kernel: eth0: renamed from tmp7aa25
Feb 12 22:00:03.619292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 22:00:03.619413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc07ec6456fefb: link becomes ready
Feb 12 22:00:03.619640 systemd-networkd[1456]: lxc07ec6456fefb: Gained carrier
Feb 12 22:00:04.002308 env[1642]: time="2024-02-12T22:00:04.002222947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 22:00:04.002529 env[1642]: time="2024-02-12T22:00:04.002265003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 22:00:04.002529 env[1642]: time="2024-02-12T22:00:04.002295976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 22:00:04.002692 env[1642]: time="2024-02-12T22:00:04.002534783Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7aa252e99ca390f9aeedcc3d7d4e5111b73321b67f55da20d632e6ceffc001c7 pid=3418 runtime=io.containerd.runc.v2
Feb 12 22:00:04.024970 systemd[1]: Started cri-containerd-7aa252e99ca390f9aeedcc3d7d4e5111b73321b67f55da20d632e6ceffc001c7.scope.
Feb 12 22:00:04.104685 env[1642]: time="2024-02-12T22:00:04.104376049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ff1839d2-cbaa-4387-b276-d2120d2c0c26,Namespace:default,Attempt:0,} returns sandbox id \"7aa252e99ca390f9aeedcc3d7d4e5111b73321b67f55da20d632e6ceffc001c7\""
Feb 12 22:00:04.123900 env[1642]: time="2024-02-12T22:00:04.123546906Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 12 22:00:04.257805 kubelet[2071]: E0212 22:00:04.257680    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:04.485666 systemd[1]: run-containerd-runc-k8s.io-7aa252e99ca390f9aeedcc3d7d4e5111b73321b67f55da20d632e6ceffc001c7-runc.MVB5J6.mount: Deactivated successfully.
Feb 12 22:00:04.905286 systemd-networkd[1456]: lxc07ec6456fefb: Gained IPv6LL
Feb 12 22:00:05.258503 kubelet[2071]: E0212 22:00:05.258397    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:06.258889 kubelet[2071]: E0212 22:00:06.258814    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:07.260100 kubelet[2071]: E0212 22:00:07.259894    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:07.401787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833427487.mount: Deactivated successfully.
Feb 12 22:00:08.260981 kubelet[2071]: E0212 22:00:08.260941    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:09.261587 kubelet[2071]: E0212 22:00:09.261533    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:10.262306 kubelet[2071]: E0212 22:00:10.262230    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:10.263412 env[1642]: time="2024-02-12T22:00:10.263367889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:10.267556 env[1642]: time="2024-02-12T22:00:10.267513426Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:10.270072 env[1642]: time="2024-02-12T22:00:10.270027302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:10.272768 env[1642]: time="2024-02-12T22:00:10.272553646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:10.273726 env[1642]: time="2024-02-12T22:00:10.273635993Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 12 22:00:10.276753 env[1642]: time="2024-02-12T22:00:10.276713321Z" level=info msg="CreateContainer within sandbox \"7aa252e99ca390f9aeedcc3d7d4e5111b73321b67f55da20d632e6ceffc001c7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 12 22:00:10.290528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292546091.mount: Deactivated successfully.
Feb 12 22:00:10.302240 env[1642]: time="2024-02-12T22:00:10.302184397Z" level=info msg="CreateContainer within sandbox \"7aa252e99ca390f9aeedcc3d7d4e5111b73321b67f55da20d632e6ceffc001c7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5df0305c7c6f41b2a60292d5baa6b0846b0f5cade6e39db6359335a53d1677bc\""
Feb 12 22:00:10.303075 env[1642]: time="2024-02-12T22:00:10.303043082Z" level=info msg="StartContainer for \"5df0305c7c6f41b2a60292d5baa6b0846b0f5cade6e39db6359335a53d1677bc\""
Feb 12 22:00:10.342022 systemd[1]: Started cri-containerd-5df0305c7c6f41b2a60292d5baa6b0846b0f5cade6e39db6359335a53d1677bc.scope.
Feb 12 22:00:10.407097 env[1642]: time="2024-02-12T22:00:10.407015383Z" level=info msg="StartContainer for \"5df0305c7c6f41b2a60292d5baa6b0846b0f5cade6e39db6359335a53d1677bc\" returns successfully"
Feb 12 22:00:11.262788 kubelet[2071]: E0212 22:00:11.262735    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:12.263289 kubelet[2071]: E0212 22:00:12.263181    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:13.264307 kubelet[2071]: E0212 22:00:13.264255    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:14.265458 kubelet[2071]: E0212 22:00:14.265409    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:15.266645 kubelet[2071]: E0212 22:00:15.266552    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:16.267693 kubelet[2071]: E0212 22:00:16.267642    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:17.268010 kubelet[2071]: E0212 22:00:17.267956    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:18.269209 kubelet[2071]: E0212 22:00:18.269157    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:19.269679 kubelet[2071]: E0212 22:00:19.269628    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:20.270595 kubelet[2071]: E0212 22:00:20.270553    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:20.360154 kubelet[2071]: I0212 22:00:20.359962    2071 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.205097238 podCreationTimestamp="2024-02-12 22:00:03 +0000 UTC" firstStartedPulling="2024-02-12 22:00:04.119199938 +0000 UTC m=+42.393006305" lastFinishedPulling="2024-02-12 22:00:10.274028865 +0000 UTC m=+48.547835234" observedRunningTime="2024-02-12 22:00:10.574036643 +0000 UTC m=+48.847843051" watchObservedRunningTime="2024-02-12 22:00:20.359926167 +0000 UTC m=+58.633732540"
Feb 12 22:00:20.360398 kubelet[2071]: I0212 22:00:20.360266    2071 topology_manager.go:215] "Topology Admit Handler" podUID="8a3eca71-bd39-4dfb-af21-f6643100fddd" podNamespace="default" podName="test-pod-1"
Feb 12 22:00:20.366206 systemd[1]: Created slice kubepods-besteffort-pod8a3eca71_bd39_4dfb_af21_f6643100fddd.slice.
Feb 12 22:00:20.420951 kubelet[2071]: I0212 22:00:20.420906    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2q4x\" (UniqueName: \"kubernetes.io/projected/8a3eca71-bd39-4dfb-af21-f6643100fddd-kube-api-access-w2q4x\") pod \"test-pod-1\" (UID: \"8a3eca71-bd39-4dfb-af21-f6643100fddd\") " pod="default/test-pod-1"
Feb 12 22:00:20.421216 kubelet[2071]: I0212 22:00:20.420963    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-70c7f047-64ec-4ebf-a0ce-6bcdebd830f6\" (UniqueName: \"kubernetes.io/nfs/8a3eca71-bd39-4dfb-af21-f6643100fddd-pvc-70c7f047-64ec-4ebf-a0ce-6bcdebd830f6\") pod \"test-pod-1\" (UID: \"8a3eca71-bd39-4dfb-af21-f6643100fddd\") " pod="default/test-pod-1"
Feb 12 22:00:20.606071 kernel: FS-Cache: Loaded
Feb 12 22:00:20.665106 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 22:00:20.665323 kernel: RPC: Registered udp transport module.
Feb 12 22:00:20.665365 kernel: RPC: Registered tcp transport module.
Feb 12 22:00:20.665395 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 22:00:20.730908 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 22:00:21.053276 kernel: NFS: Registering the id_resolver key type
Feb 12 22:00:21.053675 kernel: Key type id_resolver registered
Feb 12 22:00:21.053750 kernel: Key type id_legacy registered
Feb 12 22:00:21.097779 nfsidmap[3570]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 12 22:00:21.102333 nfsidmap[3571]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 12 22:00:21.270450 env[1642]: time="2024-02-12T22:00:21.270400993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8a3eca71-bd39-4dfb-af21-f6643100fddd,Namespace:default,Attempt:0,}"
Feb 12 22:00:21.271515 kubelet[2071]: E0212 22:00:21.271455    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:21.311924 (udev-worker)[3567]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:00:21.311924 (udev-worker)[3564]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:00:21.315557 systemd-networkd[1456]: lxcaf0ac9fc176f: Link UP
Feb 12 22:00:21.320910 kernel: eth0: renamed from tmp4560a
Feb 12 22:00:21.330356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 22:00:21.330479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaf0ac9fc176f: link becomes ready
Feb 12 22:00:21.330314 systemd-networkd[1456]: lxcaf0ac9fc176f: Gained carrier
Feb 12 22:00:21.595136 env[1642]: time="2024-02-12T22:00:21.594954464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 22:00:21.595371 env[1642]: time="2024-02-12T22:00:21.595091714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 22:00:21.595371 env[1642]: time="2024-02-12T22:00:21.595124106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 22:00:21.595539 env[1642]: time="2024-02-12T22:00:21.595387895Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4560a4de8e00ea303a9bf1b236f81690b08360d8b22757d22b3b7c1eada6321a pid=3597 runtime=io.containerd.runc.v2
Feb 12 22:00:21.622064 systemd[1]: run-containerd-runc-k8s.io-4560a4de8e00ea303a9bf1b236f81690b08360d8b22757d22b3b7c1eada6321a-runc.d57EN0.mount: Deactivated successfully.
Feb 12 22:00:21.628538 systemd[1]: Started cri-containerd-4560a4de8e00ea303a9bf1b236f81690b08360d8b22757d22b3b7c1eada6321a.scope.
Feb 12 22:00:21.687027 env[1642]: time="2024-02-12T22:00:21.686965041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8a3eca71-bd39-4dfb-af21-f6643100fddd,Namespace:default,Attempt:0,} returns sandbox id \"4560a4de8e00ea303a9bf1b236f81690b08360d8b22757d22b3b7c1eada6321a\""
Feb 12 22:00:21.689490 env[1642]: time="2024-02-12T22:00:21.689452074Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 22:00:22.219665 kubelet[2071]: E0212 22:00:22.219609    2071 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:22.271974 kubelet[2071]: E0212 22:00:22.271930    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:23.273245 kubelet[2071]: E0212 22:00:23.273129    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:23.273321 systemd-networkd[1456]: lxcaf0ac9fc176f: Gained IPv6LL
Feb 12 22:00:24.278281 kubelet[2071]: E0212 22:00:24.278237    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:25.278966 kubelet[2071]: E0212 22:00:25.278914    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:26.025157 env[1642]: time="2024-02-12T22:00:26.025105227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:26.027953 env[1642]: time="2024-02-12T22:00:26.027909789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:26.030518 env[1642]: time="2024-02-12T22:00:26.030476243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:26.033307 env[1642]: time="2024-02-12T22:00:26.033264791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:26.034299 env[1642]: time="2024-02-12T22:00:26.034258407Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 22:00:26.037254 env[1642]: time="2024-02-12T22:00:26.037218949Z" level=info msg="CreateContainer within sandbox \"4560a4de8e00ea303a9bf1b236f81690b08360d8b22757d22b3b7c1eada6321a\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 22:00:26.054492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249085263.mount: Deactivated successfully.
Feb 12 22:00:26.071993 env[1642]: time="2024-02-12T22:00:26.071944420Z" level=info msg="CreateContainer within sandbox \"4560a4de8e00ea303a9bf1b236f81690b08360d8b22757d22b3b7c1eada6321a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"cbb433324e9dd828e6ecd8524406da6d21b5734ca6d90b3b1c6d9383013e321a\""
Feb 12 22:00:26.072858 env[1642]: time="2024-02-12T22:00:26.072819672Z" level=info msg="StartContainer for \"cbb433324e9dd828e6ecd8524406da6d21b5734ca6d90b3b1c6d9383013e321a\""
Feb 12 22:00:26.106854 systemd[1]: Started cri-containerd-cbb433324e9dd828e6ecd8524406da6d21b5734ca6d90b3b1c6d9383013e321a.scope.
Feb 12 22:00:26.154138 env[1642]: time="2024-02-12T22:00:26.154086254Z" level=info msg="StartContainer for \"cbb433324e9dd828e6ecd8524406da6d21b5734ca6d90b3b1c6d9383013e321a\" returns successfully"
Feb 12 22:00:26.280119 kubelet[2071]: E0212 22:00:26.279991    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:26.614712 kubelet[2071]: I0212 22:00:26.614669    2071 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.268582447 podCreationTimestamp="2024-02-12 22:00:04 +0000 UTC" firstStartedPulling="2024-02-12 22:00:21.688591278 +0000 UTC m=+59.962397632" lastFinishedPulling="2024-02-12 22:00:26.034620769 +0000 UTC m=+64.308427139" observedRunningTime="2024-02-12 22:00:26.614589563 +0000 UTC m=+64.888395940" watchObservedRunningTime="2024-02-12 22:00:26.614611954 +0000 UTC m=+64.888418340"
Feb 12 22:00:27.280902 kubelet[2071]: E0212 22:00:27.280852    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:28.281550 kubelet[2071]: E0212 22:00:28.281497    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:28.611164 systemd[1]: run-containerd-runc-k8s.io-5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67-runc.7KHg8P.mount: Deactivated successfully.
Feb 12 22:00:28.642645 env[1642]: time="2024-02-12T22:00:28.642568612Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 22:00:28.649862 env[1642]: time="2024-02-12T22:00:28.649822070Z" level=info msg="StopContainer for \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\" with timeout 2 (s)"
Feb 12 22:00:28.650249 env[1642]: time="2024-02-12T22:00:28.650116112Z" level=info msg="Stop container \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\" with signal terminated"
Feb 12 22:00:28.657772 systemd-networkd[1456]: lxc_health: Link DOWN
Feb 12 22:00:28.657781 systemd-networkd[1456]: lxc_health: Lost carrier
Feb 12 22:00:28.780411 systemd[1]: cri-containerd-5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67.scope: Deactivated successfully.
Feb 12 22:00:28.780731 systemd[1]: cri-containerd-5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67.scope: Consumed 8.325s CPU time.
Feb 12 22:00:28.809832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67-rootfs.mount: Deactivated successfully.
Feb 12 22:00:28.836767 env[1642]: time="2024-02-12T22:00:28.836708353Z" level=info msg="shim disconnected" id=5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67
Feb 12 22:00:28.836767 env[1642]: time="2024-02-12T22:00:28.836768903Z" level=warning msg="cleaning up after shim disconnected" id=5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67 namespace=k8s.io
Feb 12 22:00:28.837119 env[1642]: time="2024-02-12T22:00:28.836782545Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:28.847500 env[1642]: time="2024-02-12T22:00:28.847403867Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3733 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:28.850643 env[1642]: time="2024-02-12T22:00:28.850600296Z" level=info msg="StopContainer for \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\" returns successfully"
Feb 12 22:00:28.851468 env[1642]: time="2024-02-12T22:00:28.851428032Z" level=info msg="StopPodSandbox for \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\""
Feb 12 22:00:28.851589 env[1642]: time="2024-02-12T22:00:28.851499364Z" level=info msg="Container to stop \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:28.851589 env[1642]: time="2024-02-12T22:00:28.851519664Z" level=info msg="Container to stop \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:28.851589 env[1642]: time="2024-02-12T22:00:28.851536457Z" level=info msg="Container to stop \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:28.851589 env[1642]: time="2024-02-12T22:00:28.851553767Z" level=info msg="Container to stop \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:28.851589 env[1642]: time="2024-02-12T22:00:28.851568986Z" level=info msg="Container to stop \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:28.854236 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e-shm.mount: Deactivated successfully.
Feb 12 22:00:28.859849 systemd[1]: cri-containerd-2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e.scope: Deactivated successfully.
Feb 12 22:00:28.896147 env[1642]: time="2024-02-12T22:00:28.896006853Z" level=info msg="shim disconnected" id=2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e
Feb 12 22:00:28.896147 env[1642]: time="2024-02-12T22:00:28.896060629Z" level=warning msg="cleaning up after shim disconnected" id=2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e namespace=k8s.io
Feb 12 22:00:28.896147 env[1642]: time="2024-02-12T22:00:28.896073481Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:28.909671 env[1642]: time="2024-02-12T22:00:28.909613099Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3765 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:28.910102 env[1642]: time="2024-02-12T22:00:28.910064544Z" level=info msg="TearDown network for sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" successfully"
Feb 12 22:00:28.910102 env[1642]: time="2024-02-12T22:00:28.910097854Z" level=info msg="StopPodSandbox for \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" returns successfully"
Feb 12 22:00:29.077273 kubelet[2071]: I0212 22:00:29.077226    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cni-path\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077481 kubelet[2071]: I0212 22:00:29.077293    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca8068d-0c0c-446b-a378-9376700880de-cilium-config-path\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077481 kubelet[2071]: I0212 22:00:29.077321    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ca8068d-0c0c-446b-a378-9376700880de-clustermesh-secrets\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077481 kubelet[2071]: I0212 22:00:29.077344    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-lib-modules\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077481 kubelet[2071]: I0212 22:00:29.077370    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-cgroup\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077481 kubelet[2071]: I0212 22:00:29.077398    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-etc-cni-netd\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077481 kubelet[2071]: I0212 22:00:29.077426    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-hubble-tls\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077749 kubelet[2071]: I0212 22:00:29.077452    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-hostproc\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077749 kubelet[2071]: I0212 22:00:29.077480    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-bpf-maps\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077749 kubelet[2071]: I0212 22:00:29.077509    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-run\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077749 kubelet[2071]: I0212 22:00:29.077540    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-kernel\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077749 kubelet[2071]: I0212 22:00:29.077572    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw5q8\" (UniqueName: \"kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-kube-api-access-hw5q8\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.077749 kubelet[2071]: I0212 22:00:29.077602    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-net\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.078052 kubelet[2071]: I0212 22:00:29.077631    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-xtables-lock\") pod \"5ca8068d-0c0c-446b-a378-9376700880de\" (UID: \"5ca8068d-0c0c-446b-a378-9376700880de\") "
Feb 12 22:00:29.078052 kubelet[2071]: I0212 22:00:29.077703    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.078052 kubelet[2071]: I0212 22:00:29.077750    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.078296 kubelet[2071]: I0212 22:00:29.078251    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.079415 kubelet[2071]: I0212 22:00:29.078501    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.079415 kubelet[2071]: I0212 22:00:29.078523    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.079581 kubelet[2071]: I0212 22:00:29.078541    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.079581 kubelet[2071]: I0212 22:00:29.079077    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.079581 kubelet[2071]: I0212 22:00:29.079101    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.079581 kubelet[2071]: I0212 22:00:29.079125    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.079581 kubelet[2071]: I0212 22:00:29.079361    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:29.085678 kubelet[2071]: I0212 22:00:29.085629    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca8068d-0c0c-446b-a378-9376700880de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 22:00:29.087959 kubelet[2071]: I0212 22:00:29.087918    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:29.091371 kubelet[2071]: I0212 22:00:29.091328    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca8068d-0c0c-446b-a378-9376700880de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 22:00:29.092389 kubelet[2071]: I0212 22:00:29.092351    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-kube-api-access-hw5q8" (OuterVolumeSpecName: "kube-api-access-hw5q8") pod "5ca8068d-0c0c-446b-a378-9376700880de" (UID: "5ca8068d-0c0c-446b-a378-9376700880de"). InnerVolumeSpecName "kube-api-access-hw5q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178641    2071 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-net\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178679    2071 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-xtables-lock\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178696    2071 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cni-path\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178711    2071 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca8068d-0c0c-446b-a378-9376700880de-cilium-config-path\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178726    2071 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ca8068d-0c0c-446b-a378-9376700880de-clustermesh-secrets\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178739    2071 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-lib-modules\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178753    2071 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-cgroup\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.178915 kubelet[2071]: I0212 22:00:29.178767    2071 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-etc-cni-netd\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.179437 kubelet[2071]: I0212 22:00:29.178779    2071 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-hubble-tls\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.179437 kubelet[2071]: I0212 22:00:29.178792    2071 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-hostproc\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.179437 kubelet[2071]: I0212 22:00:29.178806    2071 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-bpf-maps\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.179437 kubelet[2071]: I0212 22:00:29.178820    2071 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-cilium-run\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.179437 kubelet[2071]: I0212 22:00:29.178835    2071 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ca8068d-0c0c-446b-a378-9376700880de-host-proc-sys-kernel\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.179437 kubelet[2071]: I0212 22:00:29.178848    2071 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hw5q8\" (UniqueName: \"kubernetes.io/projected/5ca8068d-0c0c-446b-a378-9376700880de-kube-api-access-hw5q8\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:29.282079 kubelet[2071]: E0212 22:00:29.282042    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:29.600375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e-rootfs.mount: Deactivated successfully.
Feb 12 22:00:29.600506 systemd[1]: var-lib-kubelet-pods-5ca8068d\x2d0c0c\x2d446b\x2da378\x2d9376700880de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhw5q8.mount: Deactivated successfully.
Feb 12 22:00:29.601389 systemd[1]: var-lib-kubelet-pods-5ca8068d\x2d0c0c\x2d446b\x2da378\x2d9376700880de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 22:00:29.601489 systemd[1]: var-lib-kubelet-pods-5ca8068d\x2d0c0c\x2d446b\x2da378\x2d9376700880de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 22:00:29.627724 kubelet[2071]: I0212 22:00:29.627697    2071 scope.go:117] "RemoveContainer" containerID="5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67"
Feb 12 22:00:29.632705 env[1642]: time="2024-02-12T22:00:29.632355330Z" level=info msg="RemoveContainer for \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\""
Feb 12 22:00:29.632691 systemd[1]: Removed slice kubepods-burstable-pod5ca8068d_0c0c_446b_a378_9376700880de.slice.
Feb 12 22:00:29.632973 systemd[1]: kubepods-burstable-pod5ca8068d_0c0c_446b_a378_9376700880de.slice: Consumed 8.444s CPU time.
Feb 12 22:00:29.637442 env[1642]: time="2024-02-12T22:00:29.637394362Z" level=info msg="RemoveContainer for \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\" returns successfully"
Feb 12 22:00:29.637759 kubelet[2071]: I0212 22:00:29.637732    2071 scope.go:117] "RemoveContainer" containerID="e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8"
Feb 12 22:00:29.639305 env[1642]: time="2024-02-12T22:00:29.639264296Z" level=info msg="RemoveContainer for \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\""
Feb 12 22:00:29.643260 env[1642]: time="2024-02-12T22:00:29.643159569Z" level=info msg="RemoveContainer for \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\" returns successfully"
Feb 12 22:00:29.643723 kubelet[2071]: I0212 22:00:29.643698    2071 scope.go:117] "RemoveContainer" containerID="d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc"
Feb 12 22:00:29.644942 env[1642]: time="2024-02-12T22:00:29.644905202Z" level=info msg="RemoveContainer for \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\""
Feb 12 22:00:29.649769 env[1642]: time="2024-02-12T22:00:29.649722783Z" level=info msg="RemoveContainer for \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\" returns successfully"
Feb 12 22:00:29.650335 kubelet[2071]: I0212 22:00:29.650307    2071 scope.go:117] "RemoveContainer" containerID="975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5"
Feb 12 22:00:29.651673 env[1642]: time="2024-02-12T22:00:29.651638370Z" level=info msg="RemoveContainer for \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\""
Feb 12 22:00:29.655423 env[1642]: time="2024-02-12T22:00:29.655387355Z" level=info msg="RemoveContainer for \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\" returns successfully"
Feb 12 22:00:29.655614 kubelet[2071]: I0212 22:00:29.655587    2071 scope.go:117] "RemoveContainer" containerID="9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c"
Feb 12 22:00:29.656976 env[1642]: time="2024-02-12T22:00:29.656944327Z" level=info msg="RemoveContainer for \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\""
Feb 12 22:00:29.665422 env[1642]: time="2024-02-12T22:00:29.665369157Z" level=info msg="RemoveContainer for \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\" returns successfully"
Feb 12 22:00:29.665664 kubelet[2071]: I0212 22:00:29.665637    2071 scope.go:117] "RemoveContainer" containerID="5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67"
Feb 12 22:00:29.666023 env[1642]: time="2024-02-12T22:00:29.665939511Z" level=error msg="ContainerStatus for \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\": not found"
Feb 12 22:00:29.666401 kubelet[2071]: E0212 22:00:29.666378    2071 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\": not found" containerID="5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67"
Feb 12 22:00:29.666616 kubelet[2071]: I0212 22:00:29.666566    2071 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67"} err="failed to get container status \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\": rpc error: code = NotFound desc = an error occurred when try to find container \"5380369b915ded03b5be80b18e4abad0f8145d55789a337bef0e55c8b5a51b67\": not found"
Feb 12 22:00:29.666731 kubelet[2071]: I0212 22:00:29.666621    2071 scope.go:117] "RemoveContainer" containerID="e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8"
Feb 12 22:00:29.667545 env[1642]: time="2024-02-12T22:00:29.667102338Z" level=error msg="ContainerStatus for \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\": not found"
Feb 12 22:00:29.667730 kubelet[2071]: E0212 22:00:29.667711    2071 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\": not found" containerID="e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8"
Feb 12 22:00:29.667816 kubelet[2071]: I0212 22:00:29.667753    2071 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8"} err="failed to get container status \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8650d58dda42bdad6f37bb34ff294109b420aa2f5a10c91291a8a333b2720f8\": not found"
Feb 12 22:00:29.667816 kubelet[2071]: I0212 22:00:29.667768    2071 scope.go:117] "RemoveContainer" containerID="d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc"
Feb 12 22:00:29.668130 env[1642]: time="2024-02-12T22:00:29.668068777Z" level=error msg="ContainerStatus for \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\": not found"
Feb 12 22:00:29.668341 kubelet[2071]: E0212 22:00:29.668323    2071 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\": not found" containerID="d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc"
Feb 12 22:00:29.668419 kubelet[2071]: I0212 22:00:29.668360    2071 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc"} err="failed to get container status \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\": rpc error: code = NotFound desc = an error occurred when try to find container \"d50fa2a4f57650c039addda4a109db940f9917b3005c660b6a552577c05adffc\": not found"
Feb 12 22:00:29.668419 kubelet[2071]: I0212 22:00:29.668374    2071 scope.go:117] "RemoveContainer" containerID="975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5"
Feb 12 22:00:29.668707 env[1642]: time="2024-02-12T22:00:29.668646312Z" level=error msg="ContainerStatus for \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\": not found"
Feb 12 22:00:29.668995 kubelet[2071]: E0212 22:00:29.668866    2071 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\": not found" containerID="975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5"
Feb 12 22:00:29.669120 kubelet[2071]: I0212 22:00:29.669101    2071 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5"} err="failed to get container status \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"975a8c7e1f105e97fc7af250a1ed2ce888274126bd3f493e6af87bcdf36ef6e5\": not found"
Feb 12 22:00:29.669260 kubelet[2071]: I0212 22:00:29.669122    2071 scope.go:117] "RemoveContainer" containerID="9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c"
Feb 12 22:00:29.669478 env[1642]: time="2024-02-12T22:00:29.669417974Z" level=error msg="ContainerStatus for \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\": not found"
Feb 12 22:00:29.669757 kubelet[2071]: E0212 22:00:29.669741    2071 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\": not found" containerID="9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c"
Feb 12 22:00:29.669821 kubelet[2071]: I0212 22:00:29.669779    2071 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c"} err="failed to get container status \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f49614269c1bda559f5c3f902ae39f6000f9cfdaf306e2b28864f3f7f38c80c\": not found"
Feb 12 22:00:30.282617 kubelet[2071]: E0212 22:00:30.282560    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:30.398244 kubelet[2071]: I0212 22:00:30.398206    2071 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5ca8068d-0c0c-446b-a378-9376700880de" path="/var/lib/kubelet/pods/5ca8068d-0c0c-446b-a378-9376700880de/volumes"
Feb 12 22:00:31.283111 kubelet[2071]: E0212 22:00:31.283070    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:31.649894 kubelet[2071]: I0212 22:00:31.649841    2071 topology_manager.go:215] "Topology Admit Handler" podUID="60b0d0e4-d9da-4d50-bfe9-75ff5f890e67" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-qbp6d"
Feb 12 22:00:31.650060 kubelet[2071]: E0212 22:00:31.649917    2071 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca8068d-0c0c-446b-a378-9376700880de" containerName="mount-cgroup"
Feb 12 22:00:31.650060 kubelet[2071]: E0212 22:00:31.649932    2071 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca8068d-0c0c-446b-a378-9376700880de" containerName="clean-cilium-state"
Feb 12 22:00:31.650060 kubelet[2071]: E0212 22:00:31.649942    2071 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca8068d-0c0c-446b-a378-9376700880de" containerName="cilium-agent"
Feb 12 22:00:31.650060 kubelet[2071]: E0212 22:00:31.649951    2071 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca8068d-0c0c-446b-a378-9376700880de" containerName="apply-sysctl-overwrites"
Feb 12 22:00:31.650060 kubelet[2071]: E0212 22:00:31.649960    2071 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca8068d-0c0c-446b-a378-9376700880de" containerName="mount-bpf-fs"
Feb 12 22:00:31.650060 kubelet[2071]: I0212 22:00:31.649984    2071 memory_manager.go:346] "RemoveStaleState removing state" podUID="5ca8068d-0c0c-446b-a378-9376700880de" containerName="cilium-agent"
Feb 12 22:00:31.659492 systemd[1]: Created slice kubepods-besteffort-pod60b0d0e4_d9da_4d50_bfe9_75ff5f890e67.slice.
Feb 12 22:00:31.673422 kubelet[2071]: W0212 22:00:31.673388    2071 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.60" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.60' and this object
Feb 12 22:00:31.673422 kubelet[2071]: E0212 22:00:31.673431    2071 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.60" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.60' and this object
Feb 12 22:00:31.707672 kubelet[2071]: I0212 22:00:31.707638    2071 topology_manager.go:215] "Topology Admit Handler" podUID="f7bdca81-a174-4eed-8551-07f50a1c7aeb" podNamespace="kube-system" podName="cilium-tm6g4"
Feb 12 22:00:31.715552 systemd[1]: Created slice kubepods-burstable-podf7bdca81_a174_4eed_8551_07f50a1c7aeb.slice.
Feb 12 22:00:31.795360 kubelet[2071]: I0212 22:00:31.795320    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w99r\" (UniqueName: \"kubernetes.io/projected/60b0d0e4-d9da-4d50-bfe9-75ff5f890e67-kube-api-access-2w99r\") pod \"cilium-operator-6bc8ccdb58-qbp6d\" (UID: \"60b0d0e4-d9da-4d50-bfe9-75ff5f890e67\") " pod="kube-system/cilium-operator-6bc8ccdb58-qbp6d"
Feb 12 22:00:31.795632 kubelet[2071]: I0212 22:00:31.795385    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60b0d0e4-d9da-4d50-bfe9-75ff5f890e67-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-qbp6d\" (UID: \"60b0d0e4-d9da-4d50-bfe9-75ff5f890e67\") " pod="kube-system/cilium-operator-6bc8ccdb58-qbp6d"
Feb 12 22:00:31.895814 kubelet[2071]: I0212 22:00:31.895715    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-xtables-lock\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896019 kubelet[2071]: I0212 22:00:31.895913    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-ipsec-secrets\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896019 kubelet[2071]: I0212 22:00:31.895963    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-run\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896148 kubelet[2071]: I0212 22:00:31.896024    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-etc-cni-netd\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896148 kubelet[2071]: I0212 22:00:31.896098    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cni-path\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896237 kubelet[2071]: I0212 22:00:31.896131    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-clustermesh-secrets\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896237 kubelet[2071]: I0212 22:00:31.896229    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-config-path\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896331 kubelet[2071]: I0212 22:00:31.896318    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-bpf-maps\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896425 kubelet[2071]: I0212 22:00:31.896413    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9c7l\" (UniqueName: \"kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-kube-api-access-t9c7l\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896565 kubelet[2071]: I0212 22:00:31.896541    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-kernel\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896647 kubelet[2071]: I0212 22:00:31.896581    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hubble-tls\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896647 kubelet[2071]: I0212 22:00:31.896611    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-cgroup\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896647 kubelet[2071]: I0212 22:00:31.896642    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-lib-modules\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896783 kubelet[2071]: I0212 22:00:31.896679    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-net\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:31.896783 kubelet[2071]: I0212 22:00:31.896737    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hostproc\") pod \"cilium-tm6g4\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") " pod="kube-system/cilium-tm6g4"
Feb 12 22:00:32.283222 kubelet[2071]: E0212 22:00:32.283179    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:32.344265 kubelet[2071]: E0212 22:00:32.344238    2071 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 22:00:32.345149 kubelet[2071]: E0212 22:00:32.345122    2071 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-tm6g4" podUID="f7bdca81-a174-4eed-8551-07f50a1c7aeb"
Feb 12 22:00:32.803133 kubelet[2071]: I0212 22:00:32.803083    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-ipsec-secrets\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803133 kubelet[2071]: I0212 22:00:32.803139    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-clustermesh-secrets\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803373 kubelet[2071]: I0212 22:00:32.803168    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-bpf-maps\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803373 kubelet[2071]: I0212 22:00:32.803199    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9c7l\" (UniqueName: \"kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-kube-api-access-t9c7l\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803373 kubelet[2071]: I0212 22:00:32.803225    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hubble-tls\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803373 kubelet[2071]: I0212 22:00:32.803248    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-cgroup\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803373 kubelet[2071]: I0212 22:00:32.803273    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-etc-cni-netd\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803373 kubelet[2071]: I0212 22:00:32.803298    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cni-path\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803626 kubelet[2071]: I0212 22:00:32.803322    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hostproc\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803626 kubelet[2071]: I0212 22:00:32.803348    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-xtables-lock\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803626 kubelet[2071]: I0212 22:00:32.803375    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-run\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803626 kubelet[2071]: I0212 22:00:32.803405    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-net\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803626 kubelet[2071]: I0212 22:00:32.803434    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-kernel\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803626 kubelet[2071]: I0212 22:00:32.803461    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-lib-modules\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.803897 kubelet[2071]: I0212 22:00:32.803537    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.804011 kubelet[2071]: I0212 22:00:32.803988    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.807601 kubelet[2071]: I0212 22:00:32.807568    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 22:00:32.807758 kubelet[2071]: I0212 22:00:32.807581    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 22:00:32.807848 kubelet[2071]: I0212 22:00:32.807616    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cni-path" (OuterVolumeSpecName: "cni-path") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.807962 kubelet[2071]: I0212 22:00:32.807634    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hostproc" (OuterVolumeSpecName: "hostproc") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.808041 kubelet[2071]: I0212 22:00:32.807650    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.808108 kubelet[2071]: I0212 22:00:32.807667    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.808176 kubelet[2071]: I0212 22:00:32.807690    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.808241 kubelet[2071]: I0212 22:00:32.807707    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.811505 kubelet[2071]: I0212 22:00:32.811466    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-kube-api-access-t9c7l" (OuterVolumeSpecName: "kube-api-access-t9c7l") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "kube-api-access-t9c7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:32.811641 kubelet[2071]: I0212 22:00:32.811526    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.811641 kubelet[2071]: I0212 22:00:32.811552    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:32.811641 kubelet[2071]: I0212 22:00:32.811588    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:32.864892 env[1642]: time="2024-02-12T22:00:32.864820363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qbp6d,Uid:60b0d0e4-d9da-4d50-bfe9-75ff5f890e67,Namespace:kube-system,Attempt:0,}"
Feb 12 22:00:32.882364 env[1642]: time="2024-02-12T22:00:32.882279593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 22:00:32.882364 env[1642]: time="2024-02-12T22:00:32.882332989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 22:00:32.882632 env[1642]: time="2024-02-12T22:00:32.882348642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 22:00:32.883129 env[1642]: time="2024-02-12T22:00:32.883071167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8e5227d3d4a1b16e4a6b72ecaa0602bacce6d09f042ae6cd35319d955472ad5 pid=3797 runtime=io.containerd.runc.v2
Feb 12 22:00:32.896622 systemd[1]: Started cri-containerd-a8e5227d3d4a1b16e4a6b72ecaa0602bacce6d09f042ae6cd35319d955472ad5.scope.
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904128    2071 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-config-path\") pod \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\" (UID: \"f7bdca81-a174-4eed-8551-07f50a1c7aeb\") "
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904198    2071 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-etc-cni-netd\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904216    2071 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cni-path\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904231    2071 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hostproc\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904246    2071 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-hubble-tls\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904262    2071 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-cgroup\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904277    2071 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-run\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906311 kubelet[2071]: I0212 22:00:32.904295    2071 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-net\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906726 kubelet[2071]: I0212 22:00:32.904316    2071 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-xtables-lock\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906726 kubelet[2071]: I0212 22:00:32.904611    2071 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-host-proc-sys-kernel\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906726 kubelet[2071]: I0212 22:00:32.904641    2071 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-lib-modules\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906726 kubelet[2071]: I0212 22:00:32.904660    2071 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-clustermesh-secrets\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906726 kubelet[2071]: I0212 22:00:32.904676    2071 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7bdca81-a174-4eed-8551-07f50a1c7aeb-bpf-maps\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906726 kubelet[2071]: I0212 22:00:32.904692    2071 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t9c7l\" (UniqueName: \"kubernetes.io/projected/f7bdca81-a174-4eed-8551-07f50a1c7aeb-kube-api-access-t9c7l\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.906726 kubelet[2071]: I0212 22:00:32.904712    2071 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-ipsec-secrets\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:32.913907 kubelet[2071]: I0212 22:00:32.907740    2071 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7bdca81-a174-4eed-8551-07f50a1c7aeb" (UID: "f7bdca81-a174-4eed-8551-07f50a1c7aeb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 22:00:32.915031 systemd[1]: var-lib-kubelet-pods-f7bdca81\x2da174\x2d4eed\x2d8551\x2d07f50a1c7aeb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt9c7l.mount: Deactivated successfully.
Feb 12 22:00:32.915143 systemd[1]: var-lib-kubelet-pods-f7bdca81\x2da174\x2d4eed\x2d8551\x2d07f50a1c7aeb-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 22:00:32.915222 systemd[1]: var-lib-kubelet-pods-f7bdca81\x2da174\x2d4eed\x2d8551\x2d07f50a1c7aeb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 22:00:32.915305 systemd[1]: var-lib-kubelet-pods-f7bdca81\x2da174\x2d4eed\x2d8551\x2d07f50a1c7aeb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 22:00:32.961835 env[1642]: time="2024-02-12T22:00:32.960799620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qbp6d,Uid:60b0d0e4-d9da-4d50-bfe9-75ff5f890e67,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8e5227d3d4a1b16e4a6b72ecaa0602bacce6d09f042ae6cd35319d955472ad5\""
Feb 12 22:00:32.963209 env[1642]: time="2024-02-12T22:00:32.963162033Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 22:00:33.005704 kubelet[2071]: I0212 22:00:33.005661    2071 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7bdca81-a174-4eed-8551-07f50a1c7aeb-cilium-config-path\") on node \"172.31.17.60\" DevicePath \"\""
Feb 12 22:00:33.284165 kubelet[2071]: E0212 22:00:33.284116    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:33.641672 systemd[1]: Removed slice kubepods-burstable-podf7bdca81_a174_4eed_8551_07f50a1c7aeb.slice.
Feb 12 22:00:33.692207 kubelet[2071]: I0212 22:00:33.692170    2071 topology_manager.go:215] "Topology Admit Handler" podUID="8a0ee616-c51b-417a-9fd9-2a4adc8ad119" podNamespace="kube-system" podName="cilium-82zms"
Feb 12 22:00:33.698824 systemd[1]: Created slice kubepods-burstable-pod8a0ee616_c51b_417a_9fd9_2a4adc8ad119.slice.
Feb 12 22:00:33.810661 kubelet[2071]: I0212 22:00:33.810629    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-cilium-ipsec-secrets\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.810810 kubelet[2071]: I0212 22:00:33.810682    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-cni-path\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.810810 kubelet[2071]: I0212 22:00:33.810711    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-etc-cni-netd\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.810810 kubelet[2071]: I0212 22:00:33.810740    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-lib-modules\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.810810 kubelet[2071]: I0212 22:00:33.810767    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-xtables-lock\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.810810 kubelet[2071]: I0212 22:00:33.810809    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-cilium-config-path\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811098 kubelet[2071]: I0212 22:00:33.810838    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-hubble-tls\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811098 kubelet[2071]: I0212 22:00:33.810867    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-host-proc-sys-kernel\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811098 kubelet[2071]: I0212 22:00:33.810920    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnwkv\" (UniqueName: \"kubernetes.io/projected/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-kube-api-access-lnwkv\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811098 kubelet[2071]: I0212 22:00:33.810949    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-cilium-run\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811098 kubelet[2071]: I0212 22:00:33.810981    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-cilium-cgroup\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811098 kubelet[2071]: I0212 22:00:33.811022    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-host-proc-sys-net\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811441 kubelet[2071]: I0212 22:00:33.811056    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-bpf-maps\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811441 kubelet[2071]: I0212 22:00:33.811088    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-hostproc\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:33.811441 kubelet[2071]: I0212 22:00:33.811118    2071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a0ee616-c51b-417a-9fd9-2a4adc8ad119-clustermesh-secrets\") pod \"cilium-82zms\" (UID: \"8a0ee616-c51b-417a-9fd9-2a4adc8ad119\") " pod="kube-system/cilium-82zms"
Feb 12 22:00:34.008925 env[1642]: time="2024-02-12T22:00:34.008312160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82zms,Uid:8a0ee616-c51b-417a-9fd9-2a4adc8ad119,Namespace:kube-system,Attempt:0,}"
Feb 12 22:00:34.028484 env[1642]: time="2024-02-12T22:00:34.027379248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 22:00:34.028484 env[1642]: time="2024-02-12T22:00:34.027422074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 22:00:34.028484 env[1642]: time="2024-02-12T22:00:34.027439210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 22:00:34.028484 env[1642]: time="2024-02-12T22:00:34.027701934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c pid=3842 runtime=io.containerd.runc.v2
Feb 12 22:00:34.044788 systemd[1]: Started cri-containerd-3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c.scope.
Feb 12 22:00:34.075305 env[1642]: time="2024-02-12T22:00:34.075262806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82zms,Uid:8a0ee616-c51b-417a-9fd9-2a4adc8ad119,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\""
Feb 12 22:00:34.078198 env[1642]: time="2024-02-12T22:00:34.078160144Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 22:00:34.095138 env[1642]: time="2024-02-12T22:00:34.095088240Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"863a53a2f91f4e8cffbd39f825d2f69a38cb924925ab4f2db086a5058f873ce3\""
Feb 12 22:00:34.095661 env[1642]: time="2024-02-12T22:00:34.095619613Z" level=info msg="StartContainer for \"863a53a2f91f4e8cffbd39f825d2f69a38cb924925ab4f2db086a5058f873ce3\""
Feb 12 22:00:34.117336 systemd[1]: Started cri-containerd-863a53a2f91f4e8cffbd39f825d2f69a38cb924925ab4f2db086a5058f873ce3.scope.
Feb 12 22:00:34.187009 env[1642]: time="2024-02-12T22:00:34.180857120Z" level=info msg="StartContainer for \"863a53a2f91f4e8cffbd39f825d2f69a38cb924925ab4f2db086a5058f873ce3\" returns successfully"
Feb 12 22:00:34.194834 systemd[1]: cri-containerd-863a53a2f91f4e8cffbd39f825d2f69a38cb924925ab4f2db086a5058f873ce3.scope: Deactivated successfully.
Feb 12 22:00:34.273681 env[1642]: time="2024-02-12T22:00:34.272902679Z" level=info msg="shim disconnected" id=863a53a2f91f4e8cffbd39f825d2f69a38cb924925ab4f2db086a5058f873ce3
Feb 12 22:00:34.274030 env[1642]: time="2024-02-12T22:00:34.274005241Z" level=warning msg="cleaning up after shim disconnected" id=863a53a2f91f4e8cffbd39f825d2f69a38cb924925ab4f2db086a5058f873ce3 namespace=k8s.io
Feb 12 22:00:34.274201 env[1642]: time="2024-02-12T22:00:34.274182978Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:34.284944 kubelet[2071]: E0212 22:00:34.284904    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:34.294333 env[1642]: time="2024-02-12T22:00:34.294282238Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:34.398946 kubelet[2071]: I0212 22:00:34.398912    2071 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f7bdca81-a174-4eed-8551-07f50a1c7aeb" path="/var/lib/kubelet/pods/f7bdca81-a174-4eed-8551-07f50a1c7aeb/volumes"
Feb 12 22:00:34.644530 env[1642]: time="2024-02-12T22:00:34.644481253Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 22:00:34.684672 env[1642]: time="2024-02-12T22:00:34.684624401Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e09b1a9fbf12b30317bd12f9591c66b03e6d6f0b419f828340ad86d4ac0c40e\""
Feb 12 22:00:34.685446 env[1642]: time="2024-02-12T22:00:34.685419942Z" level=info msg="StartContainer for \"3e09b1a9fbf12b30317bd12f9591c66b03e6d6f0b419f828340ad86d4ac0c40e\""
Feb 12 22:00:34.736595 systemd[1]: Started cri-containerd-3e09b1a9fbf12b30317bd12f9591c66b03e6d6f0b419f828340ad86d4ac0c40e.scope.
Feb 12 22:00:34.795552 env[1642]: time="2024-02-12T22:00:34.795499912Z" level=info msg="StartContainer for \"3e09b1a9fbf12b30317bd12f9591c66b03e6d6f0b419f828340ad86d4ac0c40e\" returns successfully"
Feb 12 22:00:34.805164 systemd[1]: cri-containerd-3e09b1a9fbf12b30317bd12f9591c66b03e6d6f0b419f828340ad86d4ac0c40e.scope: Deactivated successfully.
Feb 12 22:00:34.914701 kubelet[2071]: I0212 22:00:34.913511    2071 setters.go:552] "Node became not ready" node="172.31.17.60" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-12T22:00:34Z","lastTransitionTime":"2024-02-12T22:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 12 22:00:34.924334 env[1642]: time="2024-02-12T22:00:34.924276841Z" level=info msg="shim disconnected" id=3e09b1a9fbf12b30317bd12f9591c66b03e6d6f0b419f828340ad86d4ac0c40e
Feb 12 22:00:34.924505 env[1642]: time="2024-02-12T22:00:34.924340040Z" level=warning msg="cleaning up after shim disconnected" id=3e09b1a9fbf12b30317bd12f9591c66b03e6d6f0b419f828340ad86d4ac0c40e namespace=k8s.io
Feb 12 22:00:34.924505 env[1642]: time="2024-02-12T22:00:34.924352581Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:34.945987 env[1642]: time="2024-02-12T22:00:34.945939754Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3993 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:35.285254 kubelet[2071]: E0212 22:00:35.285091    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:35.328127 env[1642]: time="2024-02-12T22:00:35.328071130Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:35.330750 env[1642]: time="2024-02-12T22:00:35.330703105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:35.332991 env[1642]: time="2024-02-12T22:00:35.332942391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 22:00:35.333607 env[1642]: time="2024-02-12T22:00:35.333571616Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 22:00:35.336855 env[1642]: time="2024-02-12T22:00:35.336822770Z" level=info msg="CreateContainer within sandbox \"a8e5227d3d4a1b16e4a6b72ecaa0602bacce6d09f042ae6cd35319d955472ad5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 22:00:35.359125 env[1642]: time="2024-02-12T22:00:35.359057875Z" level=info msg="CreateContainer within sandbox \"a8e5227d3d4a1b16e4a6b72ecaa0602bacce6d09f042ae6cd35319d955472ad5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9\""
Feb 12 22:00:35.359802 env[1642]: time="2024-02-12T22:00:35.359762988Z" level=info msg="StartContainer for \"ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9\""
Feb 12 22:00:35.394077 systemd[1]: Started cri-containerd-ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9.scope.
Feb 12 22:00:35.428517 env[1642]: time="2024-02-12T22:00:35.428467322Z" level=info msg="StartContainer for \"ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9\" returns successfully"
Feb 12 22:00:35.660461 env[1642]: time="2024-02-12T22:00:35.660357481Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 22:00:35.681585 env[1642]: time="2024-02-12T22:00:35.681533713Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b7d3e9fe34e6007d167114f9280300b195e7dbb377e462575fd21c9bf291c5b0\""
Feb 12 22:00:35.682254 env[1642]: time="2024-02-12T22:00:35.682173848Z" level=info msg="StartContainer for \"b7d3e9fe34e6007d167114f9280300b195e7dbb377e462575fd21c9bf291c5b0\""
Feb 12 22:00:35.702528 systemd[1]: Started cri-containerd-b7d3e9fe34e6007d167114f9280300b195e7dbb377e462575fd21c9bf291c5b0.scope.
Feb 12 22:00:35.706314 kubelet[2071]: I0212 22:00:35.705208    2071 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-qbp6d" podStartSLOduration=2.333819979 podCreationTimestamp="2024-02-12 22:00:31 +0000 UTC" firstStartedPulling="2024-02-12 22:00:32.962534981 +0000 UTC m=+71.236341352" lastFinishedPulling="2024-02-12 22:00:35.333851724 +0000 UTC m=+73.607658090" observedRunningTime="2024-02-12 22:00:35.667918282 +0000 UTC m=+73.941724657" watchObservedRunningTime="2024-02-12 22:00:35.705136717 +0000 UTC m=+73.978943094"
Feb 12 22:00:35.749762 env[1642]: time="2024-02-12T22:00:35.749720778Z" level=info msg="StartContainer for \"b7d3e9fe34e6007d167114f9280300b195e7dbb377e462575fd21c9bf291c5b0\" returns successfully"
Feb 12 22:00:35.758009 systemd[1]: cri-containerd-b7d3e9fe34e6007d167114f9280300b195e7dbb377e462575fd21c9bf291c5b0.scope: Deactivated successfully.
Feb 12 22:00:35.793101 env[1642]: time="2024-02-12T22:00:35.793039584Z" level=info msg="shim disconnected" id=b7d3e9fe34e6007d167114f9280300b195e7dbb377e462575fd21c9bf291c5b0
Feb 12 22:00:35.793101 env[1642]: time="2024-02-12T22:00:35.793088332Z" level=warning msg="cleaning up after shim disconnected" id=b7d3e9fe34e6007d167114f9280300b195e7dbb377e462575fd21c9bf291c5b0 namespace=k8s.io
Feb 12 22:00:35.793101 env[1642]: time="2024-02-12T22:00:35.793102896Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:35.802488 env[1642]: time="2024-02-12T22:00:35.802439552Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4096 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:36.285725 kubelet[2071]: E0212 22:00:36.285615    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:36.664083 env[1642]: time="2024-02-12T22:00:36.664032355Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 22:00:36.680788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107335558.mount: Deactivated successfully.
Feb 12 22:00:36.690353 env[1642]: time="2024-02-12T22:00:36.690304546Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7\""
Feb 12 22:00:36.691015 env[1642]: time="2024-02-12T22:00:36.690983926Z" level=info msg="StartContainer for \"225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7\""
Feb 12 22:00:36.716131 systemd[1]: Started cri-containerd-225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7.scope.
Feb 12 22:00:36.748669 systemd[1]: cri-containerd-225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7.scope: Deactivated successfully.
Feb 12 22:00:36.751738 env[1642]: time="2024-02-12T22:00:36.751691600Z" level=info msg="StartContainer for \"225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7\" returns successfully"
Feb 12 22:00:36.794045 env[1642]: time="2024-02-12T22:00:36.793987548Z" level=info msg="shim disconnected" id=225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7
Feb 12 22:00:36.794045 env[1642]: time="2024-02-12T22:00:36.794042621Z" level=warning msg="cleaning up after shim disconnected" id=225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7 namespace=k8s.io
Feb 12 22:00:36.794439 env[1642]: time="2024-02-12T22:00:36.794054377Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:36.804617 env[1642]: time="2024-02-12T22:00:36.804563104Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4151 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:36.918714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-225d5557d788ccbb3b9d912ce391ef301d99d51ad70dd829fca7f6f9e8d379b7-rootfs.mount: Deactivated successfully.
Feb 12 22:00:37.286673 kubelet[2071]: E0212 22:00:37.286540    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:37.345398 kubelet[2071]: E0212 22:00:37.345355    2071 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 22:00:37.670930 env[1642]: time="2024-02-12T22:00:37.670737236Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 22:00:37.692357 env[1642]: time="2024-02-12T22:00:37.692307581Z" level=info msg="CreateContainer within sandbox \"3c3139c14136b994ab7eb288c68b9392fcf95a90722ef0944b35ac2b2a21fb0c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213\""
Feb 12 22:00:37.693041 env[1642]: time="2024-02-12T22:00:37.693006595Z" level=info msg="StartContainer for \"9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213\""
Feb 12 22:00:37.730739 systemd[1]: Started cri-containerd-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213.scope.
Feb 12 22:00:37.777358 env[1642]: time="2024-02-12T22:00:37.777307504Z" level=info msg="StartContainer for \"9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213\" returns successfully"
Feb 12 22:00:37.919928 systemd[1]: run-containerd-runc-k8s.io-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213-runc.2SvVdY.mount: Deactivated successfully.
Feb 12 22:00:38.289101 kubelet[2071]: E0212 22:00:38.289013    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:38.474903 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 22:00:38.691649 kubelet[2071]: I0212 22:00:38.691607    2071 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-82zms" podStartSLOduration=5.691560837 podCreationTimestamp="2024-02-12 22:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 22:00:38.69148552 +0000 UTC m=+76.965291898" watchObservedRunningTime="2024-02-12 22:00:38.691560837 +0000 UTC m=+76.965367213"
Feb 12 22:00:39.056414 systemd[1]: run-containerd-runc-k8s.io-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213-runc.SgU6Bw.mount: Deactivated successfully.
Feb 12 22:00:39.289942 kubelet[2071]: E0212 22:00:39.289888    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:40.290959 kubelet[2071]: E0212 22:00:40.290912    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:41.244967 systemd[1]: run-containerd-runc-k8s.io-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213-runc.jCpAdP.mount: Deactivated successfully.
Feb 12 22:00:41.291995 kubelet[2071]: E0212 22:00:41.291958    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:41.436831 (udev-worker)[4730]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:00:41.437544 (udev-worker)[4245]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:00:41.474058 systemd-networkd[1456]: lxc_health: Link UP
Feb 12 22:00:41.485040 systemd-networkd[1456]: lxc_health: Gained carrier
Feb 12 22:00:41.485901 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 22:00:42.219773 kubelet[2071]: E0212 22:00:42.219725    2071 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:42.292345 kubelet[2071]: E0212 22:00:42.292298    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:43.049037 systemd-networkd[1456]: lxc_health: Gained IPv6LL
Feb 12 22:00:43.292791 kubelet[2071]: E0212 22:00:43.292745    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:43.486978 systemd[1]: run-containerd-runc-k8s.io-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213-runc.D5MeLK.mount: Deactivated successfully.
Feb 12 22:00:44.293059 kubelet[2071]: E0212 22:00:44.293015    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:45.294510 kubelet[2071]: E0212 22:00:45.294464    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:45.727121 systemd[1]: run-containerd-runc-k8s.io-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213-runc.3H6dkw.mount: Deactivated successfully.
Feb 12 22:00:46.295549 kubelet[2071]: E0212 22:00:46.295503    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:47.296083 kubelet[2071]: E0212 22:00:47.296042    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:48.020678 systemd[1]: run-containerd-runc-k8s.io-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213-runc.89iZNg.mount: Deactivated successfully.
Feb 12 22:00:48.297551 kubelet[2071]: E0212 22:00:48.297329    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:49.298160 kubelet[2071]: E0212 22:00:49.298109    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:50.265423 systemd[1]: run-containerd-runc-k8s.io-9a84c8d5de7c0f99cc41b0ed8c8de618c79bd21e007b5fd5a98387779a7c4213-runc.pN9K12.mount: Deactivated successfully.
Feb 12 22:00:50.298853 kubelet[2071]: E0212 22:00:50.298760    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:51.299402 kubelet[2071]: E0212 22:00:51.299362    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:52.300498 kubelet[2071]: E0212 22:00:52.300228    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:53.300982 kubelet[2071]: E0212 22:00:53.300940    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:54.301725 kubelet[2071]: E0212 22:00:54.301648    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:55.302090 kubelet[2071]: E0212 22:00:55.302041    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:56.302923 kubelet[2071]: E0212 22:00:56.302880    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:57.303923 kubelet[2071]: E0212 22:00:57.303849    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:58.304035 kubelet[2071]: E0212 22:00:58.303974    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:00:59.304489 kubelet[2071]: E0212 22:00:59.304432    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:00.304711 kubelet[2071]: E0212 22:01:00.304657    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:01.305949 kubelet[2071]: E0212 22:01:01.305897    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:02.220247 kubelet[2071]: E0212 22:01:02.220189    2071 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:02.306669 kubelet[2071]: E0212 22:01:02.306614    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:03.307489 kubelet[2071]: E0212 22:01:03.307432    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:04.308185 kubelet[2071]: E0212 22:01:04.308130    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:05.308567 kubelet[2071]: E0212 22:01:05.308513    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:05.589728 kubelet[2071]: E0212 22:01:05.589618    2071 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T22:00:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T22:00:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T22:00:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T22:00:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":57035507},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a\\\",\\\"registry.k8s.io/kube-proxy:v1.28.6\\\"],\\\"sizeBytes\\\":26354482},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.17.60\": Patch \"https://172.31.25.10:6443/api/v1/nodes/172.31.17.60/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:05.593334 systemd[1]: cri-containerd-ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9.scope: Deactivated successfully.
Feb 12 22:01:05.625079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9-rootfs.mount: Deactivated successfully.
Feb 12 22:01:05.644801 env[1642]: time="2024-02-12T22:01:05.644736036Z" level=info msg="shim disconnected" id=ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9
Feb 12 22:01:05.644801 env[1642]: time="2024-02-12T22:01:05.644798542Z" level=warning msg="cleaning up after shim disconnected" id=ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9 namespace=k8s.io
Feb 12 22:01:05.645403 env[1642]: time="2024-02-12T22:01:05.644818639Z" level=info msg="cleaning up dead shim"
Feb 12 22:01:05.653510 env[1642]: time="2024-02-12T22:01:05.653460696Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4871 runtime=io.containerd.runc.v2\n"
Feb 12 22:01:05.745418 kubelet[2071]: I0212 22:01:05.744945    2071 scope.go:117] "RemoveContainer" containerID="ea876eb0fa05a3b59b75dd6cf4d0be8e51946e395e9a889d9c20d1708c6ae5a9"
Feb 12 22:01:05.747520 env[1642]: time="2024-02-12T22:01:05.747477479Z" level=info msg="CreateContainer within sandbox \"a8e5227d3d4a1b16e4a6b72ecaa0602bacce6d09f042ae6cd35319d955472ad5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Feb 12 22:01:05.771715 env[1642]: time="2024-02-12T22:01:05.771626403Z" level=info msg="CreateContainer within sandbox \"a8e5227d3d4a1b16e4a6b72ecaa0602bacce6d09f042ae6cd35319d955472ad5\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"18d48cd457ba711a1427c9e4201fbbc9995fc54842f70b03b153c76382fdcbcd\""
Feb 12 22:01:05.772478 env[1642]: time="2024-02-12T22:01:05.772445579Z" level=info msg="StartContainer for \"18d48cd457ba711a1427c9e4201fbbc9995fc54842f70b03b153c76382fdcbcd\""
Feb 12 22:01:05.798484 systemd[1]: Started cri-containerd-18d48cd457ba711a1427c9e4201fbbc9995fc54842f70b03b153c76382fdcbcd.scope.
Feb 12 22:01:05.844920 env[1642]: time="2024-02-12T22:01:05.844165829Z" level=info msg="StartContainer for \"18d48cd457ba711a1427c9e4201fbbc9995fc54842f70b03b153c76382fdcbcd\" returns successfully"
Feb 12 22:01:05.919014 kubelet[2071]: E0212 22:01:05.918824    2071 controller.go:193] "Failed to update lease" err="Put \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:06.309710 kubelet[2071]: E0212 22:01:06.309578    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:07.309889 kubelet[2071]: E0212 22:01:07.309823    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:08.310824 kubelet[2071]: E0212 22:01:08.310770    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:09.311255 kubelet[2071]: E0212 22:01:09.311202    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:10.311798 kubelet[2071]: E0212 22:01:10.311743    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:11.312481 kubelet[2071]: E0212 22:01:11.312432    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:12.313219 kubelet[2071]: E0212 22:01:12.313176    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:13.313494 kubelet[2071]: E0212 22:01:13.313439    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:14.314272 kubelet[2071]: E0212 22:01:14.314214    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:15.314589 kubelet[2071]: E0212 22:01:15.314547    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:15.590960 kubelet[2071]: E0212 22:01:15.590915    2071 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.17.60\": Get \"https://172.31.25.10:6443/api/v1/nodes/172.31.17.60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:15.920888 kubelet[2071]: E0212 22:01:15.920460    2071 controller.go:193] "Failed to update lease" err="Put \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:16.315533 kubelet[2071]: E0212 22:01:16.315301    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:17.315700 kubelet[2071]: E0212 22:01:17.315647    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:18.316030 kubelet[2071]: E0212 22:01:18.315978    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:19.317020 kubelet[2071]: E0212 22:01:19.316962    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:20.317273 kubelet[2071]: E0212 22:01:20.317198    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:21.317640 kubelet[2071]: E0212 22:01:21.317588    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:22.219956 kubelet[2071]: E0212 22:01:22.219901    2071 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:22.250823 env[1642]: time="2024-02-12T22:01:22.250762850Z" level=info msg="StopPodSandbox for \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\""
Feb 12 22:01:22.251382 env[1642]: time="2024-02-12T22:01:22.250899118Z" level=info msg="TearDown network for sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" successfully"
Feb 12 22:01:22.251382 env[1642]: time="2024-02-12T22:01:22.250949217Z" level=info msg="StopPodSandbox for \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" returns successfully"
Feb 12 22:01:22.251904 env[1642]: time="2024-02-12T22:01:22.251863981Z" level=info msg="RemovePodSandbox for \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\""
Feb 12 22:01:22.252009 env[1642]: time="2024-02-12T22:01:22.251911672Z" level=info msg="Forcibly stopping sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\""
Feb 12 22:01:22.252067 env[1642]: time="2024-02-12T22:01:22.252001621Z" level=info msg="TearDown network for sandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" successfully"
Feb 12 22:01:22.263855 env[1642]: time="2024-02-12T22:01:22.263811157Z" level=info msg="RemovePodSandbox \"2020317e6e40efc1aeda81c98190a2e00d9893d80fc1ecf73f7d12d44ede2e3e\" returns successfully"
Feb 12 22:01:22.318738 kubelet[2071]: E0212 22:01:22.318695    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:23.319507 kubelet[2071]: E0212 22:01:23.319453    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:24.320504 kubelet[2071]: E0212 22:01:24.320452    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:25.320836 kubelet[2071]: E0212 22:01:25.320787    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:25.592047 kubelet[2071]: E0212 22:01:25.591951    2071 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.17.60\": Get \"https://172.31.25.10:6443/api/v1/nodes/172.31.17.60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:25.921395 kubelet[2071]: E0212 22:01:25.921195    2071 controller.go:193] "Failed to update lease" err="Put \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:26.322519 kubelet[2071]: E0212 22:01:26.322326    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:27.322562 kubelet[2071]: E0212 22:01:27.322510    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:28.323364 kubelet[2071]: E0212 22:01:28.323310    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:29.323717 kubelet[2071]: E0212 22:01:29.323665    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:30.323860 kubelet[2071]: E0212 22:01:30.323803    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:31.324739 kubelet[2071]: E0212 22:01:31.324697    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:32.325745 kubelet[2071]: E0212 22:01:32.325674    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:33.326900 kubelet[2071]: E0212 22:01:33.326842    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:34.327853 kubelet[2071]: E0212 22:01:34.327818    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:35.328313 kubelet[2071]: E0212 22:01:35.328261    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:35.592818 kubelet[2071]: E0212 22:01:35.592546    2071 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.17.60\": Get \"https://172.31.25.10:6443/api/v1/nodes/172.31.17.60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:35.633984 kubelet[2071]: E0212 22:01:35.633355    2071 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium-operator-6bc8ccdb58-qbp6d.17b33c8e56fcf3a1", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"cilium-operator-6bc8ccdb58-qbp6d", UID:"60b0d0e4-d9da-4d50-bfe9-75ff5f890e67", APIVersion:"v1", ResourceVersion:"890", FieldPath:"spec.containers{cilium-operator}"}, Reason:"Pulled", Message:"Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 22, 1, 5, 745990561, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 22, 1, 5, 745990561, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'Post "https://172.31.25.10:6443/api/v1/namespaces/kube-system/events": unexpected EOF'(may retry after sleeping)
Feb 12 22:01:35.635177 kubelet[2071]: E0212 22:01:35.635157    2071 controller.go:193] "Failed to update lease" err="Put \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": unexpected EOF"
Feb 12 22:01:35.641016 kubelet[2071]: E0212 22:01:35.640095    2071 controller.go:193] "Failed to update lease" err="Put \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": read tcp 172.31.17.60:40026->172.31.25.10:6443: read: connection reset by peer"
Feb 12 22:01:35.641016 kubelet[2071]: I0212 22:01:35.640129    2071 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 12 22:01:35.642217 kubelet[2071]: E0212 22:01:35.641403    2071 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": dial tcp 172.31.25.10:6443: connect: connection refused" interval="200ms"
Feb 12 22:01:35.755525 kubelet[2071]: E0212 22:01:35.755419    2071 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium-operator-6bc8ccdb58-qbp6d.17b33c8e56fcf3a1", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"cilium-operator-6bc8ccdb58-qbp6d", UID:"60b0d0e4-d9da-4d50-bfe9-75ff5f890e67", APIVersion:"v1", ResourceVersion:"890", FieldPath:"spec.containers{cilium-operator}"}, Reason:"Pulled", Message:"Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"172.31.17.60"}, FirstTimestamp:time.Date(2024, time.February, 12, 22, 1, 5, 745990561, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 22, 1, 5, 745990561, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.17.60"}': 'Post "https://172.31.25.10:6443/api/v1/namespaces/kube-system/events": dial tcp 172.31.25.10:6443: connect: connection refused'(may retry after sleeping)
Feb 12 22:01:35.843533 kubelet[2071]: E0212 22:01:35.843382    2071 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": dial tcp 172.31.25.10:6443: connect: connection refused" interval="400ms"
Feb 12 22:01:36.245291 kubelet[2071]: E0212 22:01:36.245185    2071 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": dial tcp 172.31.25.10:6443: connect: connection refused" interval="800ms"
Feb 12 22:01:36.328896 kubelet[2071]: E0212 22:01:36.328830    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:36.633889 kubelet[2071]: E0212 22:01:36.633827    2071 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.17.60\": Get \"https://172.31.25.10:6443/api/v1/nodes/172.31.17.60?timeout=10s\": dial tcp 172.31.25.10:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 12 22:01:36.633889 kubelet[2071]: E0212 22:01:36.633862    2071 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 12 22:01:36.641028 kubelet[2071]: I0212 22:01:36.640994    2071 status_manager.go:853] "Failed to get status for pod" podUID="60b0d0e4-d9da-4d50-bfe9-75ff5f890e67" pod="kube-system/cilium-operator-6bc8ccdb58-qbp6d" err="Get \"https://172.31.25.10:6443/api/v1/namespaces/kube-system/pods/cilium-operator-6bc8ccdb58-qbp6d\": dial tcp 172.31.25.10:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 12 22:01:36.643078 kubelet[2071]: I0212 22:01:36.642025    2071 status_manager.go:853] "Failed to get status for pod" podUID="60b0d0e4-d9da-4d50-bfe9-75ff5f890e67" pod="kube-system/cilium-operator-6bc8ccdb58-qbp6d" err="Get \"https://172.31.25.10:6443/api/v1/namespaces/kube-system/pods/cilium-operator-6bc8ccdb58-qbp6d\": dial tcp 172.31.25.10:6443: connect: connection refused"
Feb 12 22:01:36.645320 kubelet[2071]: I0212 22:01:36.645295    2071 status_manager.go:853] "Failed to get status for pod" podUID="60b0d0e4-d9da-4d50-bfe9-75ff5f890e67" pod="kube-system/cilium-operator-6bc8ccdb58-qbp6d" err="Get \"https://172.31.25.10:6443/api/v1/namespaces/kube-system/pods/cilium-operator-6bc8ccdb58-qbp6d\": dial tcp 172.31.25.10:6443: connect: connection refused"
Feb 12 22:01:37.329888 kubelet[2071]: E0212 22:01:37.329807    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:38.330448 kubelet[2071]: E0212 22:01:38.330393    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:39.331297 kubelet[2071]: E0212 22:01:39.331242    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:40.332032 kubelet[2071]: E0212 22:01:40.331964    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:41.332894 kubelet[2071]: E0212 22:01:41.332839    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:42.220064 kubelet[2071]: E0212 22:01:42.220008    2071 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:42.333532 kubelet[2071]: E0212 22:01:42.333494    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:43.334321 kubelet[2071]: E0212 22:01:43.334270    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:44.335195 kubelet[2071]: E0212 22:01:44.335151    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:45.336129 kubelet[2071]: E0212 22:01:45.336083    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:46.336511 kubelet[2071]: E0212 22:01:46.336464    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:47.046493 kubelet[2071]: E0212 22:01:47.046449    2071 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.60?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Feb 12 22:01:47.336993 kubelet[2071]: E0212 22:01:47.336723    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:48.337320 kubelet[2071]: E0212 22:01:48.337262    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:49.338152 kubelet[2071]: E0212 22:01:49.338102    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:50.339084 kubelet[2071]: E0212 22:01:50.339030    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:51.340040 kubelet[2071]: E0212 22:01:51.339985    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:52.340775 kubelet[2071]: E0212 22:01:52.340717    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:53.341528 kubelet[2071]: E0212 22:01:53.341483    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:54.342668 kubelet[2071]: E0212 22:01:54.342615    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:55.343232 kubelet[2071]: E0212 22:01:55.343191    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:56.344233 kubelet[2071]: E0212 22:01:56.344182    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:56.985268 kubelet[2071]: E0212 22:01:56.985226    2071 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.17.60\": Get \"https://172.31.25.10:6443/api/v1/nodes/172.31.17.60?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 12 22:01:57.344417 kubelet[2071]: E0212 22:01:57.344363    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 22:01:58.344916 kubelet[2071]: E0212 22:01:58.344852    2071 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"