Sep 13 00:44:55.005699 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:44:55.005733 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:44:55.005752 kernel: BIOS-provided physical RAM map:
Sep 13 00:44:55.005763 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:44:55.005774 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 13 00:44:55.005785 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 13 00:44:55.005799 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:44:55.005810 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:44:55.005824 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:44:55.005836 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:44:55.005848 kernel: NX (Execute Disable) protection: active
Sep 13 00:44:55.005859 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Sep 13 00:44:55.005871 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Sep 13 00:44:55.005883 kernel: extended physical RAM map:
Sep 13 00:44:55.005900 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:44:55.005913 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
Sep 13 00:44:55.005925 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
Sep 13 00:44:55.005938 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
Sep 13 00:44:55.005950 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 13 00:44:55.005963 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:44:55.005975 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:44:55.005987 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:44:55.006000 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:44:55.006012 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:44:55.006027 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98
Sep 13 00:44:55.006039 kernel: SMBIOS 2.7 present.
Sep 13 00:44:55.006052 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 13 00:44:55.006064 kernel: Hypervisor detected: KVM
Sep 13 00:44:55.006076 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:44:55.006088 kernel: kvm-clock: cpu 0, msr 3319f001, primary cpu clock
Sep 13 00:44:55.006100 kernel: kvm-clock: using sched offset of 4131082543 cycles
Sep 13 00:44:55.006113 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:44:55.006126 kernel: tsc: Detected 2499.998 MHz processor
Sep 13 00:44:55.006139 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:44:55.006151 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:44:55.006167 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 13 00:44:55.006179 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:44:55.006191 kernel: Using GB pages for direct mapping
Sep 13 00:44:55.006204 kernel: Secure boot disabled
Sep 13 00:44:55.006217 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:44:55.006235 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 13 00:44:55.006249 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:44:55.006265 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:44:55.006279 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 13 00:44:55.006292 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 13 00:44:55.006306 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 13 00:44:55.006320 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:44:55.006333 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:44:55.006347 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 13 00:44:55.006362 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 13 00:44:55.006376 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:44:55.006389 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:44:55.006402 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 13 00:44:55.006415 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 13 00:44:55.006428 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 13 00:44:55.006441 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 13 00:44:55.006467 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 13 00:44:55.006480 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 13 00:44:55.006497 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 13 00:44:55.006510 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 13 00:44:55.006524 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 13 00:44:55.006537 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 13 00:44:55.006550 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 13 00:44:55.006563 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 13 00:44:55.006576 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:44:55.006589 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:44:55.006602 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 13 00:44:55.006619 kernel: NUMA: Initialized distance table, cnt=1
Sep 13 00:44:55.006631 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 13 00:44:55.006645 kernel: Zone ranges:
Sep 13 00:44:55.006658 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:44:55.006672 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 13 00:44:55.006685 kernel: Normal empty
Sep 13 00:44:55.006698 kernel: Movable zone start for each node
Sep 13 00:44:55.006710 kernel: Early memory node ranges
Sep 13 00:44:55.006724 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:44:55.006740 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 13 00:44:55.006753 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 13 00:44:55.006766 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 13 00:44:55.006780 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:44:55.006793 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:44:55.006806 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:44:55.006819 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 13 00:44:55.006832 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:44:55.006845 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:44:55.006861 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 13 00:44:55.006877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:44:55.006889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:44:55.006902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:44:55.006914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:44:55.006927 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:44:55.006939 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:44:55.006952 kernel: TSC deadline timer available
Sep 13 00:44:55.006964 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:44:55.006979 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 13 00:44:55.006992 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:44:55.007005 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:44:55.007017 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:44:55.007030 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:44:55.007043 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:44:55.007056 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:44:55.007068 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
Sep 13 00:44:55.007081 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:44:55.007097 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:44:55.007109 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 13 00:44:55.007122 kernel: Policy zone: DMA32
Sep 13 00:44:55.007137 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:44:55.007150 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:44:55.007163 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:44:55.007177 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:44:55.007190 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:44:55.007205 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 160904K reserved, 0K cma-reserved)
Sep 13 00:44:55.007219 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:44:55.007231 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:44:55.007244 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:44:55.007256 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:44:55.007269 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:44:55.007282 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:44:55.007308 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:44:55.007322 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:44:55.007335 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:44:55.007349 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:44:55.007362 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:44:55.007378 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:44:55.007392 kernel: random: crng init done
Sep 13 00:44:55.007404 kernel: Console: colour dummy device 80x25
Sep 13 00:44:55.007418 kernel: printk: console [tty0] enabled
Sep 13 00:44:55.007431 kernel: printk: console [ttyS0] enabled
Sep 13 00:44:55.007445 kernel: ACPI: Core revision 20210730
Sep 13 00:44:55.009523 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 13 00:44:55.009546 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:44:55.009561 kernel: x2apic enabled
Sep 13 00:44:55.009576 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:44:55.009591 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:44:55.009606 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 13 00:44:55.009621 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:44:55.009636 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:44:55.009653 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:44:55.009668 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:44:55.009682 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:44:55.009696 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:44:55.009711 kernel: RETBleed: Vulnerable
Sep 13 00:44:55.009725 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:44:55.009740 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:44:55.009754 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:44:55.009768 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:44:55.009782 kernel: active return thunk: its_return_thunk
Sep 13 00:44:55.009796 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:44:55.009813 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:44:55.009828 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:44:55.009842 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:44:55.009856 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:44:55.009870 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:44:55.009885 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:44:55.009899 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:44:55.009913 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:44:55.009927 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 13 00:44:55.009942 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:44:55.009956 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:44:55.009973 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:44:55.009987 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 13 00:44:55.010001 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 13 00:44:55.010014 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 13 00:44:55.010028 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 13 00:44:55.010043 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 13 00:44:55.010057 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:44:55.010071 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:44:55.010085 kernel: LSM: Security Framework initializing
Sep 13 00:44:55.010099 kernel: SELinux: Initializing.
Sep 13 00:44:55.010114 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:44:55.010131 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:44:55.010146 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:44:55.010160 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:44:55.010175 kernel: signal: max sigframe size: 3632
Sep 13 00:44:55.010189 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:44:55.010204 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:44:55.010218 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:44:55.010232 kernel: x86: Booting SMP configuration:
Sep 13 00:44:55.010247 kernel: .... node #0, CPUs: #1
Sep 13 00:44:55.010261 kernel: kvm-clock: cpu 1, msr 3319f041, secondary cpu clock
Sep 13 00:44:55.010279 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
Sep 13 00:44:55.010294 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:44:55.010310 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:44:55.010324 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:44:55.010339 kernel: smpboot: Max logical packages: 1
Sep 13 00:44:55.010353 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 13 00:44:55.010368 kernel: devtmpfs: initialized
Sep 13 00:44:55.010382 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:44:55.010400 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 13 00:44:55.010414 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:44:55.010429 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:44:55.010444 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:44:55.010471 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:44:55.010483 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:44:55.010493 kernel: audit: type=2000 audit(1757724294.865:1): state=initialized audit_enabled=0 res=1
Sep 13 00:44:55.010504 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:44:55.010516 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:44:55.019485 kernel: cpuidle: using governor menu
Sep 13 00:44:55.019522 kernel: ACPI: bus type PCI registered
Sep 13 00:44:55.019539 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:44:55.019554 kernel: dca service started, version 1.12.1
Sep 13 00:44:55.019569 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:44:55.019584 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:44:55.019600 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:44:55.019615 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:44:55.019629 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:44:55.019650 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:44:55.019665 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:44:55.019680 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:44:55.019695 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:44:55.019709 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:44:55.019724 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:44:55.019739 kernel: ACPI: Interpreter enabled
Sep 13 00:44:55.019753 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:44:55.019768 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:44:55.019786 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:44:55.019801 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:44:55.019816 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:44:55.020030 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:44:55.020163 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 13 00:44:55.020182 kernel: acpiphp: Slot [3] registered
Sep 13 00:44:55.020196 kernel: acpiphp: Slot [4] registered
Sep 13 00:44:55.020211 kernel: acpiphp: Slot [5] registered
Sep 13 00:44:55.020246 kernel: acpiphp: Slot [6] registered
Sep 13 00:44:55.020261 kernel: acpiphp: Slot [7] registered
Sep 13 00:44:55.020276 kernel: acpiphp: Slot [8] registered
Sep 13 00:44:55.020290 kernel: acpiphp: Slot [9] registered
Sep 13 00:44:55.020305 kernel: acpiphp: Slot [10] registered
Sep 13 00:44:55.020320 kernel: acpiphp: Slot [11] registered
Sep 13 00:44:55.020334 kernel: acpiphp: Slot [12] registered
Sep 13 00:44:55.020349 kernel: acpiphp: Slot [13] registered
Sep 13 00:44:55.020363 kernel: acpiphp: Slot [14] registered
Sep 13 00:44:55.020380 kernel: acpiphp: Slot [15] registered
Sep 13 00:44:55.020395 kernel: acpiphp: Slot [16] registered
Sep 13 00:44:55.020410 kernel: acpiphp: Slot [17] registered
Sep 13 00:44:55.020424 kernel: acpiphp: Slot [18] registered
Sep 13 00:44:55.020439 kernel: acpiphp: Slot [19] registered
Sep 13 00:44:55.020467 kernel: acpiphp: Slot [20] registered
Sep 13 00:44:55.020481 kernel: acpiphp: Slot [21] registered
Sep 13 00:44:55.020502 kernel: acpiphp: Slot [22] registered
Sep 13 00:44:55.020513 kernel: acpiphp: Slot [23] registered
Sep 13 00:44:55.020525 kernel: acpiphp: Slot [24] registered
Sep 13 00:44:55.020543 kernel: acpiphp: Slot [25] registered
Sep 13 00:44:55.020558 kernel: acpiphp: Slot [26] registered
Sep 13 00:44:55.020572 kernel: acpiphp: Slot [27] registered
Sep 13 00:44:55.020586 kernel: acpiphp: Slot [28] registered
Sep 13 00:44:55.020602 kernel: acpiphp: Slot [29] registered
Sep 13 00:44:55.020616 kernel: acpiphp: Slot [30] registered
Sep 13 00:44:55.020631 kernel: acpiphp: Slot [31] registered
Sep 13 00:44:55.020646 kernel: PCI host bridge to bus 0000:00
Sep 13 00:44:55.020783 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:44:55.020908 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:44:55.021025 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:44:55.021140 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:44:55.021256 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:44:55.021371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:44:55.021531 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:44:55.021676 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:44:55.021822 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 13 00:44:55.021952 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:44:55.022082 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 13 00:44:55.022214 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 13 00:44:55.022351 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 13 00:44:55.022488 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 13 00:44:55.022620 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 13 00:44:55.022745 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 13 00:44:55.022878 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 13 00:44:55.023003 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 13 00:44:55.023150 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:44:55.023279 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 13 00:44:55.023409 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:44:55.023572 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:44:55.023705 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 13 00:44:55.023842 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:44:55.023971 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 13 00:44:55.023991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:44:55.024007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:44:55.024022 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:44:55.024040 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:44:55.024055 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:44:55.024070 kernel: iommu: Default domain type: Translated
Sep 13 00:44:55.024084 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:44:55.024214 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 13 00:44:55.024347 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:44:55.027566 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 13 00:44:55.027602 kernel: vgaarb: loaded
Sep 13 00:44:55.027623 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:44:55.027639 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Sep 13 00:44:55.027654 kernel: PTP clock support registered
Sep 13 00:44:55.027669 kernel: Registered efivars operations
Sep 13 00:44:55.027684 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:44:55.027699 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:44:55.027714 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff]
Sep 13 00:44:55.027728 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 13 00:44:55.027742 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 13 00:44:55.027760 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 00:44:55.027775 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 13 00:44:55.027790 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:44:55.027805 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:44:55.027820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:44:55.027835 kernel: pnp: PnP ACPI init
Sep 13 00:44:55.027850 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:44:55.027864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:44:55.027879 kernel: NET: Registered PF_INET protocol family
Sep 13 00:44:55.027897 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:44:55.027912 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:44:55.027927 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:44:55.027941 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:44:55.027956 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 13 00:44:55.027971 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:44:55.027986 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:44:55.028001 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:44:55.028016 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:44:55.028033 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:44:55.028174 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:44:55.028294 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:44:55.028410 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:44:55.028548 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:44:55.028666 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:44:55.028805 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:44:55.028939 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 13 00:44:55.028963 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:44:55.028980 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:44:55.028995 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:44:55.029011 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:44:55.029025 kernel: Initialise system trusted keyrings
Sep 13 00:44:55.029040 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:44:55.029055 kernel: Key type asymmetric registered
Sep 13 00:44:55.029070 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:44:55.029087 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:44:55.029102 kernel: io scheduler mq-deadline registered
Sep 13 00:44:55.029117 kernel: io scheduler kyber registered
Sep 13 00:44:55.029131 kernel: io scheduler bfq registered
Sep 13 00:44:55.029146 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:44:55.029161 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:44:55.029176 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:44:55.029190 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:44:55.029205 kernel: i8042: Warning: Keylock active
Sep 13 00:44:55.029222 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:44:55.029237 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:44:55.029376 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:44:55.031569 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:44:55.038447 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:44:54 UTC (1757724294)
Sep 13 00:44:55.038608 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:44:55.038628 kernel: intel_pstate: CPU model not supported
Sep 13 00:44:55.038644 kernel: efifb: probing for efifb
Sep 13 00:44:55.038664 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 13 00:44:55.038679 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 13 00:44:55.038694 kernel: efifb: scrolling: redraw
Sep 13 00:44:55.038709 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:44:55.038723 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:44:55.038739 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:44:55.038779 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:44:55.038797 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:44:55.038813 kernel: Segment Routing with IPv6
Sep 13 00:44:55.038831 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:44:55.038846 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:44:55.038862 kernel: Key type dns_resolver registered
Sep 13 00:44:55.038878 kernel: IPI shorthand broadcast: enabled
Sep 13 00:44:55.038894 kernel: sched_clock: Marking stable (341052999, 135606188)->(560566473, -83907286)
Sep 13 00:44:55.038910 kernel: registered taskstats version 1
Sep 13 00:44:55.038925 kernel: Loading compiled-in X.509 certificates
Sep 13 00:44:55.038940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:44:55.038955 kernel: Key type .fscrypt registered
Sep 13 00:44:55.038974 kernel: Key type fscrypt-provisioning registered
Sep 13 00:44:55.038989 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:44:55.039005 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:44:55.039021 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:44:55.039037 kernel: ima: No architecture policies found
Sep 13 00:44:55.039052 kernel: clk: Disabling unused clocks
Sep 13 00:44:55.039068 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:44:55.039084 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:44:55.039100 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:44:55.039119 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:44:55.039137 kernel: Run /init as init process
Sep 13 00:44:55.039154 kernel: with arguments:
Sep 13 00:44:55.039169 kernel: /init
Sep 13 00:44:55.039184 kernel: with environment:
Sep 13 00:44:55.039200 kernel: HOME=/
Sep 13 00:44:55.039216 kernel: TERM=linux
Sep 13 00:44:55.039232 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:44:55.039251 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:44:55.039273 systemd[1]: Detected virtualization amazon.
Sep 13 00:44:55.039289 systemd[1]: Detected architecture x86-64.
Sep 13 00:44:55.039305 systemd[1]: Running in initrd.
Sep 13 00:44:55.039321 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:44:55.039337 systemd[1]: Hostname set to <localhost>.
Sep 13 00:44:55.039354 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:44:55.039371 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:44:55.039390 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:44:55.039406 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:44:55.039422 systemd[1]: Reached target paths.target.
Sep 13 00:44:55.039437 systemd[1]: Reached target slices.target.
Sep 13 00:44:55.039466 systemd[1]: Reached target swap.target.
Sep 13 00:44:55.039486 systemd[1]: Reached target timers.target.
Sep 13 00:44:55.039503 systemd[1]: Listening on iscsid.socket.
Sep 13 00:44:55.039522 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:44:55.039538 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:44:55.039555 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:44:55.039571 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:44:55.039587 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:44:55.039603 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:44:55.039622 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:44:55.039638 systemd[1]: Reached target sockets.target.
Sep 13 00:44:55.039654 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:44:55.039670 systemd[1]: Finished network-cleanup.service.
Sep 13 00:44:55.039686 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:44:55.039703 systemd[1]: Starting systemd-journald.service...
Sep 13 00:44:55.039719 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:44:55.039736 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:44:55.039752 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:44:55.039771 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:44:55.039788 kernel: audit: type=1130 audit(1757724294.992:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.039804 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:44:55.039819 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:44:55.039836 kernel: audit: type=1130 audit(1757724295.010:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.039852 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:44:55.039869 kernel: audit: type=1130 audit(1757724295.031:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.039891 systemd-journald[185]: Journal started
Sep 13 00:44:55.039971 systemd-journald[185]: Runtime Journal (/run/log/journal/ec241c0b269235c8842fd81c17c172c2) is 4.8M, max 38.3M, 33.5M free.
Sep 13 00:44:54.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.000522 systemd-modules-load[186]: Inserted module 'overlay'
Sep 13 00:44:55.049363 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:44:55.064476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:44:55.068474 systemd[1]: Started systemd-journald.service.
Sep 13 00:44:55.069972 systemd-resolved[187]: Positive Trust Anchors:
Sep 13 00:44:55.073259 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:44:55.076266 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:44:55.118301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:44:55.118366 kernel: Bridge firewalling registered
Sep 13 00:44:55.118394 kernel: audit: type=1130 audit(1757724295.093:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.118412 kernel: audit: type=1130 audit(1757724295.109:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.087726 systemd-modules-load[186]: Inserted module 'br_netfilter'
Sep 13 00:44:55.143842 kernel: SCSI subsystem initialized
Sep 13 00:44:55.143877 kernel: audit: type=1130 audit(1757724295.121:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.143899 kernel: audit: type=1130 audit(1757724295.129:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.143925 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:44:55.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.094974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:44:55.106676 systemd-resolved[187]: Defaulting to hostname 'linux'.
Sep 13 00:44:55.110503 systemd[1]: Started systemd-resolved.service.
Sep 13 00:44:55.172613 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:44:55.172648 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:44:55.172669 kernel: audit: type=1130 audit(1757724295.160:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.172749 dracut-cmdline[203]: dracut-dracut-053
Sep 13 00:44:55.172749 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:44:55.121881 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:44:55.191219 kernel: audit: type=1130 audit(1757724295.182:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.130354 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:44:55.139771 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:44:55.158623 systemd-modules-load[186]: Inserted module 'dm_multipath'
Sep 13 00:44:55.159566 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:44:55.161868 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:44:55.178075 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:44:55.239485 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:44:55.258487 kernel: iscsi: registered transport (tcp)
Sep 13 00:44:55.283348 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:44:55.283430 kernel: QLogic iSCSI HBA Driver
Sep 13 00:44:55.315765 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:44:55.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.317713 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:44:55.369504 kernel: raid6: avx512x4 gen() 16532 MB/s
Sep 13 00:44:55.387504 kernel: raid6: avx512x4 xor() 7838 MB/s
Sep 13 00:44:55.405501 kernel: raid6: avx512x2 gen() 16738 MB/s
Sep 13 00:44:55.423483 kernel: raid6: avx512x2 xor() 24251 MB/s
Sep 13 00:44:55.441492 kernel: raid6: avx512x1 gen() 16549 MB/s
Sep 13 00:44:55.459485 kernel: raid6: avx512x1 xor() 21916 MB/s
Sep 13 00:44:55.477502 kernel: raid6: avx2x4 gen() 15864 MB/s
Sep 13 00:44:55.495485 kernel: raid6: avx2x4 xor() 7444 MB/s
Sep 13 00:44:55.513502 kernel: raid6: avx2x2 gen() 15665 MB/s
Sep 13 00:44:55.531485 kernel: raid6: avx2x2 xor() 18166 MB/s
Sep 13 00:44:55.549505 kernel: raid6: avx2x1 gen() 11586 MB/s
Sep 13 00:44:55.567477 kernel: raid6: avx2x1 xor() 15763 MB/s
Sep 13 00:44:55.585484 kernel: raid6: sse2x4 gen() 9508 MB/s
Sep 13 00:44:55.603476 kernel: raid6: sse2x4 xor() 6135 MB/s
Sep 13 00:44:55.621480 kernel: raid6: sse2x2 gen() 10511 MB/s
Sep 13 00:44:55.639474 kernel: raid6: sse2x2 xor() 6102 MB/s
Sep 13 00:44:55.657479 kernel: raid6: sse2x1 gen() 9443 MB/s
Sep 13 00:44:55.675714 kernel: raid6: sse2x1 xor() 4825 MB/s
Sep 13 00:44:55.675756 kernel: raid6: using algorithm avx512x2 gen() 16738 MB/s
Sep 13 00:44:55.675776 kernel: raid6: .... xor() 24251 MB/s, rmw enabled
Sep 13 00:44:55.676813 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:44:55.691483 kernel: xor: automatically using best checksumming function avx
Sep 13 00:44:55.794481 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 13 00:44:55.803767 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:44:55.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.804000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:44:55.804000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:44:55.805266 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:44:55.818848 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Sep 13 00:44:55.824217 systemd[1]: Started systemd-udevd.service.
Sep 13 00:44:55.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.826845 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:44:55.846439 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation
Sep 13 00:44:55.877956 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:44:55.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.879420 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:44:55.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:55.923479 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:44:55.975480 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:44:55.993584 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:44:55.993650 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:44:56.019934 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 13 00:44:56.035813 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 13 00:44:56.035982 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 13 00:44:56.036134 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 13 00:44:56.036301 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:44:56.036322 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2a:47:1b:da:e7
Sep 13 00:44:56.047484 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 13 00:44:56.056498 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:44:56.056567 kernel: GPT:9289727 != 16777215
Sep 13 00:44:56.056587 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:44:56.056606 kernel: GPT:9289727 != 16777215
Sep 13 00:44:56.056623 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:44:56.058901 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:44:56.066288 (udev-worker)[436]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:44:56.122499 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (441)
Sep 13 00:44:56.169080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:44:56.194819 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 00:44:56.213372 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 00:44:56.222650 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 00:44:56.223296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 00:44:56.225787 systemd[1]: Starting disk-uuid.service...
Sep 13 00:44:56.235583 disk-uuid[593]: Primary Header is updated.
Sep 13 00:44:56.235583 disk-uuid[593]: Secondary Entries is updated.
Sep 13 00:44:56.235583 disk-uuid[593]: Secondary Header is updated.
Sep 13 00:44:56.243476 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:44:56.249471 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:44:56.255474 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:44:57.255481 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:44:57.255867 disk-uuid[594]: The operation has completed successfully.
Sep 13 00:44:57.381350 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:44:57.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.381512 systemd[1]: Finished disk-uuid.service.
Sep 13 00:44:57.388979 systemd[1]: Starting verity-setup.service...
Sep 13 00:44:57.415678 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:44:57.504002 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 00:44:57.506494 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 00:44:57.511748 systemd[1]: Finished verity-setup.service.
Sep 13 00:44:57.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.596597 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 00:44:57.596722 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 00:44:57.597440 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 00:44:57.598194 systemd[1]: Starting ignition-setup.service...
Sep 13 00:44:57.600827 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 00:44:57.622139 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:44:57.622213 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:44:57.622226 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:44:57.663480 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:44:57.674631 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:44:57.678121 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 00:44:57.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.679000 audit: BPF prog-id=9 op=LOAD
Sep 13 00:44:57.680037 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:44:57.686525 systemd[1]: Finished ignition-setup.service.
Sep 13 00:44:57.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.688715 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 00:44:57.702189 systemd-networkd[1103]: lo: Link UP
Sep 13 00:44:57.702204 systemd-networkd[1103]: lo: Gained carrier
Sep 13 00:44:57.702674 systemd-networkd[1103]: Enumeration completed
Sep 13 00:44:57.702766 systemd[1]: Started systemd-networkd.service.
Sep 13 00:44:57.702965 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:44:57.706037 systemd-networkd[1103]: eth0: Link UP
Sep 13 00:44:57.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.706041 systemd-networkd[1103]: eth0: Gained carrier
Sep 13 00:44:57.706751 systemd[1]: Reached target network.target.
Sep 13 00:44:57.708770 systemd[1]: Starting iscsiuio.service...
Sep 13 00:44:57.715757 systemd[1]: Started iscsiuio.service.
Sep 13 00:44:57.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.717258 systemd[1]: Starting iscsid.service...
Sep 13 00:44:57.719269 systemd-networkd[1103]: eth0: DHCPv4 address 172.31.31.206/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:44:57.722084 iscsid[1110]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:44:57.722084 iscsid[1110]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 13 00:44:57.722084 iscsid[1110]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 00:44:57.722084 iscsid[1110]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 00:44:57.722084 iscsid[1110]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:44:57.722084 iscsid[1110]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 00:44:57.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.723230 systemd[1]: Started iscsid.service.
Sep 13 00:44:57.727006 systemd[1]: Starting dracut-initqueue.service...
Sep 13 00:44:57.738831 systemd[1]: Finished dracut-initqueue.service.
Sep 13 00:44:57.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:57.739631 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 00:44:57.740757 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:44:57.741922 systemd[1]: Reached target remote-fs.target.
Sep 13 00:44:57.743873 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 00:44:57.753098 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 00:44:57.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.293328 ignition[1105]: Ignition 2.14.0
Sep 13 00:44:58.293342 ignition[1105]: Stage: fetch-offline
Sep 13 00:44:58.293489 ignition[1105]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:44:58.293522 ignition[1105]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:44:58.310601 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:44:58.311157 ignition[1105]: Ignition finished successfully
Sep 13 00:44:58.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.312325 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 00:44:58.313984 systemd[1]: Starting ignition-fetch.service...
Sep 13 00:44:58.324024 ignition[1129]: Ignition 2.14.0
Sep 13 00:44:58.324034 ignition[1129]: Stage: fetch
Sep 13 00:44:58.324193 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:44:58.324216 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:44:58.330294 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:44:58.330958 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:44:58.390415 ignition[1129]: INFO : PUT result: OK
Sep 13 00:44:58.400951 ignition[1129]: DEBUG : parsed url from cmdline: ""
Sep 13 00:44:58.400951 ignition[1129]: INFO : no config URL provided
Sep 13 00:44:58.400951 ignition[1129]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:44:58.400951 ignition[1129]: INFO : no config at "/usr/lib/ignition/user.ign"
Sep 13 00:44:58.403171 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:44:58.403171 ignition[1129]: INFO : PUT result: OK
Sep 13 00:44:58.403171 ignition[1129]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 13 00:44:58.405178 ignition[1129]: INFO : GET result: OK
Sep 13 00:44:58.405178 ignition[1129]: DEBUG : parsing config with SHA512: 06c6668817c59b1f50542c5578b69aa34d4e957ef370466f9a26a4c0a20893cf80649d301788895c3750261909bb6d2b489c0b235477ae8b3b3930ee6e473a00
Sep 13 00:44:58.408511 unknown[1129]: fetched base config from "system"
Sep 13 00:44:58.408526 unknown[1129]: fetched base config from "system"
Sep 13 00:44:58.408535 unknown[1129]: fetched user config from "aws"
Sep 13 00:44:58.409585 ignition[1129]: fetch: fetch complete
Sep 13 00:44:58.409591 ignition[1129]: fetch: fetch passed
Sep 13 00:44:58.409633 ignition[1129]: Ignition finished successfully
Sep 13 00:44:58.412422 systemd[1]: Finished ignition-fetch.service.
Sep 13 00:44:58.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.414130 systemd[1]: Starting ignition-kargs.service...
Sep 13 00:44:58.424110 ignition[1135]: Ignition 2.14.0
Sep 13 00:44:58.424125 ignition[1135]: Stage: kargs
Sep 13 00:44:58.424335 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:44:58.424368 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:44:58.431770 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:44:58.432645 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:44:58.433718 ignition[1135]: INFO : PUT result: OK
Sep 13 00:44:58.436253 ignition[1135]: kargs: kargs passed
Sep 13 00:44:58.436307 ignition[1135]: Ignition finished successfully
Sep 13 00:44:58.437440 systemd[1]: Finished ignition-kargs.service.
Sep 13 00:44:58.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.438824 systemd[1]: Starting ignition-disks.service...
Sep 13 00:44:58.447283 ignition[1141]: Ignition 2.14.0
Sep 13 00:44:58.447296 ignition[1141]: Stage: disks
Sep 13 00:44:58.447439 ignition[1141]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:44:58.447473 ignition[1141]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:44:58.453268 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:44:58.453838 ignition[1141]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:44:58.455004 ignition[1141]: INFO : PUT result: OK
Sep 13 00:44:58.457914 ignition[1141]: disks: disks passed
Sep 13 00:44:58.457966 ignition[1141]: Ignition finished successfully
Sep 13 00:44:58.459770 systemd[1]: Finished ignition-disks.service.
Sep 13 00:44:58.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.460404 systemd[1]: Reached target initrd-root-device.target.
Sep 13 00:44:58.461338 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:44:58.462169 systemd[1]: Reached target local-fs.target.
Sep 13 00:44:58.462995 systemd[1]: Reached target sysinit.target.
Sep 13 00:44:58.463841 systemd[1]: Reached target basic.target.
Sep 13 00:44:58.465770 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 00:44:58.503268 systemd-fsck[1150]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 13 00:44:58.506436 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 00:44:58.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.508213 systemd[1]: Mounting sysroot.mount...
Sep 13 00:44:58.528474 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:44:58.529171 systemd[1]: Mounted sysroot.mount.
Sep 13 00:44:58.530411 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:44:58.539503 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:44:58.541414 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
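Each "parsing config with SHA512: ..." line above is a fingerprint of the raw config bytes, which is why the same base.ign reports the same digest in every stage. A sketch of the equivalent computation (path from the log; that the digest is plain SHA-512 over the file contents is an assumption consistent with the repeated value above):

    package main

    import (
        "crypto/sha512"
        "fmt"
        "os"
    )

    func main() {
        // Recompute the fingerprint Ignition logs for its base config
        // (assumption: plain SHA-512 over the raw file bytes).
        data, err := os.ReadFile("/usr/lib/ignition/base.d/base.ign")
        if err != nil {
            panic(err)
        }
        fmt.Printf("parsing config with SHA512: %x\n", sha512.Sum512(data))
    }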
Sep 13 00:44:58.542141 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:44:58.542171 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:44:58.543961 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:44:58.562788 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:44:58.565001 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:44:58.577771 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:44:58.584998 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1167)
Sep 13 00:44:58.585065 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:44:58.588424 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:44:58.588633 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:44:58.592699 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:44:58.597134 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:44:58.601483 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:44:58.603318 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:44:58.610717 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:44:58.829526 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:44:58.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.830992 systemd[1]: Starting ignition-mount.service...
Sep 13 00:44:58.832265 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:44:58.840835 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:44:58.840964 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:44:58.855184 ignition[1232]: INFO : Ignition 2.14.0
Sep 13 00:44:58.856664 ignition[1232]: INFO : Stage: mount
Sep 13 00:44:58.857990 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:44:58.860036 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:44:58.872649 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:44:58.873767 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:44:58.876377 ignition[1232]: INFO : PUT result: OK
Sep 13 00:44:58.877819 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:44:58.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.881121 ignition[1232]: INFO : mount: mount passed
Sep 13 00:44:58.881760 ignition[1232]: INFO : Ignition finished successfully
Sep 13 00:44:58.883198 systemd[1]: Finished ignition-mount.service.
Sep 13 00:44:58.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:58.885043 systemd[1]: Starting ignition-files.service...
Sep 13 00:44:58.893526 systemd[1]: Mounting sysroot-usr-share-oem.mount...
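The OEM filesystem is addressed by label throughout this log ("/dev/disk/by-label/OEM"); udev maintains that directory as symlinks to the real device nodes, so resolving the label back to a device is just a symlink walk. A minimal sketch:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // /dev/disk/by-label/* are udev-managed symlinks; EvalSymlinks walks
        // them back to the underlying node (/dev/nvme0n1p6 in this boot).
        dev, err := filepath.EvalSymlinks("/dev/disk/by-label/OEM")
        if err != nil {
            panic(err)
        }
        fmt.Println(dev)
    }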
Sep 13 00:44:58.913471 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1242)
Sep 13 00:44:58.916764 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:44:58.916832 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:44:58.916845 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:44:58.931486 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:44:58.935067 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:44:58.945971 ignition[1261]: INFO : Ignition 2.14.0
Sep 13 00:44:58.945971 ignition[1261]: INFO : Stage: files
Sep 13 00:44:58.947338 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:44:58.947338 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:44:58.953134 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:44:58.953826 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:44:58.954421 ignition[1261]: INFO : PUT result: OK
Sep 13 00:44:58.956983 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:44:58.962068 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:44:58.962068 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:44:58.977356 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:44:58.978589 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:44:58.981009 unknown[1261]: wrote ssh authorized keys file for user: core
Sep 13 00:44:58.981904 ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:44:58.983648 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:44:58.984991 ignition[1261]: INFO : GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 00:44:59.072177 ignition[1261]: INFO : GET result: OK
Sep 13 00:44:59.240816 systemd-networkd[1103]: eth0: Gained IPv6LL
Sep 13 00:44:59.303311 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:44:59.307936 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:44:59.307936 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:44:59.307936 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:44:59.307936 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:44:59.330359 ignition[1261]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3674709621"
Sep 13 00:44:59.330359 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3674709621": device or resource busy
Sep 13 00:44:59.330359 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3674709621", trying btrfs: device or resource busy
Sep 13 00:44:59.330359 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3674709621"
Sep 13 00:44:59.330359 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3674709621"
Sep 13 00:44:59.341890 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem3674709621"
Sep 13 00:44:59.342979 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem3674709621"
Sep 13 00:44:59.342979 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:44:59.342979 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:44:59.342979 ignition[1261]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:44:59.536049 ignition[1261]: INFO : GET result: OK
Sep 13 00:44:59.647462 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:44:59.649532 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:44:59.649532 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:44:59.684598 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3229564428"
Sep 13 00:44:59.684598 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3229564428": device or resource busy
Sep 13 00:44:59.684598 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3229564428", trying btrfs: device or resource busy
Sep 13 00:44:59.684598 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3229564428"
Sep 13 00:44:59.684598 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3229564428"
Sep 13 00:44:59.684598 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem3229564428"
Sep 13 00:44:59.684598 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem3229564428"
Sep 13 00:44:59.684598 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:44:59.684598 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:44:59.684598 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:44:59.669416 systemd[1]: mnt-oem3229564428.mount: Deactivated successfully.
Sep 13 00:44:59.710399 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem535369333"
Sep 13 00:44:59.710399 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem535369333": device or resource busy
Sep 13 00:44:59.710399 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem535369333", trying btrfs: device or resource busy
Sep 13 00:44:59.710399 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem535369333"
Sep 13 00:44:59.710399 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem535369333"
Sep 13 00:44:59.710399 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem535369333"
Sep 13 00:44:59.710399 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem535369333"
Sep 13 00:44:59.710399 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:44:59.710399 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:44:59.710399 ignition[1261]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 00:44:59.696071 systemd[1]: mnt-oem535369333.mount: Deactivated successfully.
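The repeated CRITICAL/ERROR pairs around the OEM mounts are not boot failures: the writer tries the device as ext4 first and falls back to btrfs on the same device, and only the second attempt needs to succeed. A hedged sketch of that retry pattern (the temp-dir naming mimics the /mnt/oemNNN mount points; requires root):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // mountWithFallback mirrors the log's pattern: attempt ext4 and, if the
    // kernel refuses (here with EBUSY), retry the same device as btrfs.
    func mountWithFallback(dev, dir string) (string, error) {
        var lastErr error
        for _, fstype := range []string{"ext4", "btrfs"} {
            if err := syscall.Mount(dev, dir, fstype, 0, ""); err != nil {
                lastErr = err
                continue
            }
            return fstype, nil
        }
        return "", fmt.Errorf("mounting %s at %s: %w", dev, dir, lastErr)
    }

    func main() {
        dir, err := os.MkdirTemp("/mnt", "oem")
        if err != nil {
            panic(err)
        }
        fstype, err := mountWithFallback("/dev/disk/by-label/OEM", dir)
        if err != nil {
            panic(err)
        }
        fmt.Println("mounted as", fstype)
        _ = syscall.Unmount(dir, 0)
    }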
Sep 13 00:45:00.093958 ignition[1261]: INFO : GET result: OK
Sep 13 00:45:00.467857 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:45:00.482623 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:45:00.482623 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:45:00.503914 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4198500804"
Sep 13 00:45:00.514278 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4198500804": device or resource busy
Sep 13 00:45:00.514278 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4198500804", trying btrfs: device or resource busy
Sep 13 00:45:00.514278 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4198500804"
Sep 13 00:45:00.514278 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4198500804"
Sep 13 00:45:00.514278 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem4198500804"
Sep 13 00:45:00.514278 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem4198500804"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(13): [started] processing unit "nvidia.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Sep 13 00:45:00.514278 ignition[1261]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:45:00.669026 kernel: kauditd_printk_skb: 26 callbacks suppressed
Sep 13 00:45:00.669060 kernel: audit: type=1130 audit(1757724300.536:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.669080 kernel: audit: type=1130 audit(1757724300.582:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.669099 kernel: audit: type=1131 audit(1757724300.583:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.669139 kernel: audit: type=1130 audit(1757724300.614:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:45:00.669367 ignition[1261]: INFO : files: files passed
Sep 13 00:45:00.669367 ignition[1261]: INFO : Ignition finished successfully
Sep 13 00:45:00.710120 kernel: audit: type=1130 audit(1757724300.669:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.710156 kernel: audit: type=1131 audit(1757724300.669:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.710176 kernel: audit: type=1130 audit(1757724300.708:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.535583 systemd[1]: Finished ignition-files.service.
Sep 13 00:45:00.558415 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:45:00.720811 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:45:00.560290 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:45:00.561492 systemd[1]: Starting ignition-quench.service...
Sep 13 00:45:00.581100 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:45:00.581249 systemd[1]: Finished ignition-quench.service.
Sep 13 00:45:00.586797 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:45:00.744913 kernel: audit: type=1130 audit(1757724300.731:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.744951 kernel: audit: type=1131 audit(1757724300.731:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.615000 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:45:00.627935 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:45:00.668988 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:45:00.669136 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:45:00.792728 kernel: audit: type=1131 audit(1757724300.775:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.670240 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:45:00.683788 systemd[1]: Reached target initrd.target.
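The kernel "audit: type=NNNN" printk lines and journald's named audit records above are the same events seen twice; the numeric types come from include/uapi/linux/audit.h. A small lookup table for the types that dominate this log:

    package main

    import "fmt"

    // Numeric audit record types seen in this log, per include/uapi/linux/audit.h.
    var auditTypes = map[int]string{
        1130: "SERVICE_START", // AUDIT_SERVICE_START: systemd started a unit
        1131: "SERVICE_STOP",  // AUDIT_SERVICE_STOP: systemd stopped a unit
        1334: "BPF",           // AUDIT_BPF: a BPF program was loaded or unloaded
    }

    func main() {
        for _, t := range []int{1130, 1131, 1334} {
            fmt.Printf("type=%d => %s\n", t, auditTypes[t])
        }
    }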
Sep 13 00:45:00.686997 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:45:00.688329 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:45:00.707592 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:45:00.710667 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:45:00.730888 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:45:00.731022 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:45:00.733372 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:45:00.750120 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:45:00.762137 systemd[1]: Stopped target timers.target.
Sep 13 00:45:00.765945 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:45:00.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.766038 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:45:00.776217 systemd[1]: Stopped target initrd.target.
Sep 13 00:45:00.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.796200 systemd[1]: Stopped target basic.target.
Sep 13 00:45:00.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.804661 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:45:00.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.809503 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:45:00.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.821090 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:45:00.822462 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:45:00.823738 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:45:00.833304 systemd[1]: Stopped target sysinit.target.
Sep 13 00:45:00.834604 systemd[1]: Stopped target local-fs.target.
Sep 13 00:45:00.835865 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:45:00.845307 systemd[1]: Stopped target swap.target.
Sep 13 00:45:00.846600 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:45:00.846699 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:45:00.857118 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:45:00.858307 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:45:00.858392 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:45:00.868747 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:45:00.868833 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:45:00.870005 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:45:00.938775 ignition[1299]: INFO : Ignition 2.14.0
Sep 13 00:45:00.938775 ignition[1299]: INFO : Stage: umount
Sep 13 00:45:00.938775 ignition[1299]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:45:00.938775 ignition[1299]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:45:00.870079 systemd[1]: Stopped ignition-files.service.
Sep 13 00:45:00.872300 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:45:00.874965 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:45:00.875948 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:45:00.876056 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:45:00.893291 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:45:00.893381 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:45:00.962869 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:45:00.976512 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:45:00.976512 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:45:00.986211 ignition[1299]: INFO : PUT result: OK
Sep 13 00:45:00.990931 ignition[1299]: INFO : umount: umount passed
Sep 13 00:45:00.991858 ignition[1299]: INFO : Ignition finished successfully
Sep 13 00:45:00.992380 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:45:00.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.992540 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:45:00.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.995146 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:45:00.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:00.995222 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:45:00.996608 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:45:00.996752 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:45:00.998233 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:45:00.998339 systemd[1]: Stopped ignition-fetch.service.
Sep 13 00:45:01.000534 systemd[1]: Stopped target network.target.
Sep 13 00:45:01.001081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:45:01.001145 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:45:01.001730 systemd[1]: Stopped target paths.target.
Sep 13 00:45:01.002308 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:45:01.007543 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:45:01.008547 systemd[1]: Stopped target slices.target.
Sep 13 00:45:01.011042 systemd[1]: Stopped target sockets.target.
Sep 13 00:45:01.014773 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:45:01.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.014845 systemd[1]: Closed iscsid.socket.
Sep 13 00:45:01.016157 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:45:01.016199 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:45:01.017885 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:45:01.018220 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:45:01.020789 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:45:01.022343 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:45:01.025522 systemd-networkd[1103]: eth0: DHCPv6 lease lost
Sep 13 00:45:01.031629 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:45:01.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.031857 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:45:01.036173 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:45:01.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.038000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:45:01.038000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:45:01.036325 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:45:01.038933 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:45:01.039035 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:45:01.041320 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:45:01.045480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:45:01.045631 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:45:01.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.047927 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:45:01.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.048014 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:45:01.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.049809 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:45:01.049879 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:45:01.059061 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:45:01.064016 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:45:01.074810 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:45:01.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.074920 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:45:01.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.078034 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:45:01.078227 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:45:01.081104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:45:01.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.081223 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:45:01.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.082439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:45:01.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.082512 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:45:01.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.084134 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:45:01.084217 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:45:01.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.085911 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:45:01.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.085980 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:45:01.087477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:45:01.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:01.087545 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:45:01.090376 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:45:01.097616 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:45:01.097882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Sep 13 00:45:01.100962 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:45:01.101210 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:45:01.102908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:45:01.102981 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:45:01.107711 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 00:45:01.108754 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:45:01.109518 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:45:01.111545 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:45:01.111664 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:45:01.114750 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:45:01.116775 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:45:01.116862 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:45:01.120284 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:45:01.147608 systemd[1]: Switching root.
Sep 13 00:45:01.200010 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:45:01.200125 iscsid[1110]: iscsid shutting down.
Sep 13 00:45:01.202741 systemd-journald[185]: Journal stopped
Sep 13 00:45:06.838888 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:45:06.838965 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:45:06.838990 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:45:06.839013 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:45:06.839030 kernel: SELinux: policy capability open_perms=1
Sep 13 00:45:06.839048 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:45:06.839069 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:45:06.839089 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:45:06.839106 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:45:06.839123 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:45:06.839140 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:45:06.839159 systemd[1]: Successfully loaded SELinux policy in 118.320ms.
Sep 13 00:45:06.839196 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.682ms.
Sep 13 00:45:06.839216 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:45:06.839239 systemd[1]: Detected virtualization amazon.
Sep 13 00:45:06.839258 systemd[1]: Detected architecture x86-64.
Sep 13 00:45:06.839276 systemd[1]: Detected first boot.
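The "policy capability X=N" lines above enumerate feature flags baked into the loaded SELinux policy; once selinuxfs is mounted they are also readable as one file per capability under /sys/fs/selinux/policy_capabilities. A sketch that reprints them from a running system:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Each kernel "SELinux: policy capability X=N" line corresponds to a
        // file named X containing N under selinuxfs after the policy loads.
        files, err := filepath.Glob("/sys/fs/selinux/policy_capabilities/*")
        if err != nil {
            panic(err)
        }
        for _, f := range files {
            v, err := os.ReadFile(f)
            if err != nil {
                continue
            }
            fmt.Printf("policy capability %s=%s\n",
                filepath.Base(f), strings.TrimSpace(string(v)))
        }
    }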
Sep 13 00:45:06.839295 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:45:06.839313 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:45:06.839331 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:45:06.839349 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:45:06.839369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:45:06.839391 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:45:06.839414 kernel: kauditd_printk_skb: 47 callbacks suppressed
Sep 13 00:45:06.839430 kernel: audit: type=1334 audit(1757724306.595:87): prog-id=12 op=LOAD
Sep 13 00:45:06.839447 kernel: audit: type=1334 audit(1757724306.596:88): prog-id=3 op=UNLOAD
Sep 13 00:45:06.839481 kernel: audit: type=1334 audit(1757724306.597:89): prog-id=13 op=LOAD
Sep 13 00:45:06.839498 kernel: audit: type=1334 audit(1757724306.598:90): prog-id=14 op=LOAD
Sep 13 00:45:06.839518 kernel: audit: type=1334 audit(1757724306.598:91): prog-id=4 op=UNLOAD
Sep 13 00:45:06.839535 kernel: audit: type=1334 audit(1757724306.598:92): prog-id=5 op=UNLOAD
Sep 13 00:45:06.839553 kernel: audit: type=1334 audit(1757724306.600:93): prog-id=15 op=LOAD
Sep 13 00:45:06.839569 kernel: audit: type=1334 audit(1757724306.600:94): prog-id=12 op=UNLOAD
Sep 13 00:45:06.839586 kernel: audit: type=1334 audit(1757724306.606:95): prog-id=16 op=LOAD
Sep 13 00:45:06.839603 kernel: audit: type=1334 audit(1757724306.608:96): prog-id=17 op=LOAD
Sep 13 00:45:06.839624 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:45:06.839642 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:45:06.839661 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:45:06.839682 systemd[1]: Stopped iscsid.service.
Sep 13 00:45:06.839701 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:45:06.839720 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 00:45:06.839739 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:45:06.839758 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:45:06.841489 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:45:06.841539 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 13 00:45:06.841567 systemd[1]: Created slice system-getty.slice.
Sep 13 00:45:06.841589 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:45:06.841609 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:45:06.841630 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:45:06.841650 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:45:06.841672 systemd[1]: Created slice user.slice.
Sep 13 00:45:06.841693 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:45:06.841723 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:45:06.841754 systemd[1]: Set up automount boot.automount.
Sep 13 00:45:06.841777 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:45:06.841797 systemd[1]: Stopped target initrd-switch-root.target.
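"Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-provided UUID rather than generating a random one; that UUID is exposed through DMI. A sketch reading the same source (the sysfs path is the standard location; product_uuid is root-readable, so run as root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // On first boot systemd derives the machine ID from the VM UUID;
        // the UUID itself is what the kernel exposes via DMI here.
        raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
        if err != nil {
            panic(err)
        }
        fmt.Println("VM UUID:", strings.TrimSpace(string(raw)))
    }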
Sep 13 00:45:06.841815 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:45:06.841835 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:45:06.841855 systemd[1]: Reached target integritysetup.target.
Sep 13 00:45:06.841877 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:45:06.841897 systemd[1]: Reached target remote-fs.target.
Sep 13 00:45:06.841917 systemd[1]: Reached target slices.target.
Sep 13 00:45:06.841937 systemd[1]: Reached target swap.target.
Sep 13 00:45:06.841957 systemd[1]: Reached target torcx.target.
Sep 13 00:45:06.841977 systemd[1]: Reached target veritysetup.target.
Sep 13 00:45:06.841998 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:45:06.842017 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:45:06.842038 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:45:06.842057 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:45:06.842080 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:45:06.842100 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:45:06.842120 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:45:06.842140 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:45:06.842162 systemd[1]: Mounting media.mount...
Sep 13 00:45:06.842183 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:45:06.842203 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:45:06.842222 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:45:06.842243 systemd[1]: Mounting tmp.mount...
Sep 13 00:45:06.842266 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:45:06.842286 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:45:06.842306 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:45:06.842327 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:45:06.842347 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:45:06.842367 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:45:06.842388 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:45:06.842408 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:45:06.842428 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:45:06.842519 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:45:06.842542 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:45:06.842560 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 00:45:06.842579 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:45:06.842599 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:45:06.842619 systemd[1]: Stopped systemd-journald.service.
Sep 13 00:45:06.842639 systemd[1]: Starting systemd-journald.service...
Sep 13 00:45:06.842659 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:45:06.842678 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:45:06.842700 kernel: loop: module loaded
Sep 13 00:45:06.842721 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:45:06.842742 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:45:06.842761 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:45:06.842780 systemd[1]: Stopped verity-setup.service.
Sep 13 00:45:06.842800 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:45:06.842818 kernel: fuse: init (API version 7.34)
Sep 13 00:45:06.842836 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:45:06.842854 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:45:06.842883 systemd-journald[1407]: Journal started
Sep 13 00:45:06.842961 systemd-journald[1407]: Runtime Journal (/run/log/journal/ec241c0b269235c8842fd81c17c172c2) is 4.8M, max 38.3M, 33.5M free.
Sep 13 00:45:02.083000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:45:02.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:45:02.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:45:02.284000 audit: BPF prog-id=10 op=LOAD
Sep 13 00:45:02.284000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 00:45:02.284000 audit: BPF prog-id=11 op=LOAD
Sep 13 00:45:02.284000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 00:45:02.550000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:45:02.550000 audit[1333]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b4 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:45:02.550000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:45:02.553000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:45:02.553000 audit[1333]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d999 a2=1ed a3=0 items=2 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:45:02.553000 audit: CWD cwd="/"
Sep 13 00:45:02.553000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:45:02.553000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:45:02.553000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:45:06.595000 audit: BPF prog-id=12 op=LOAD
Sep 13 00:45:06.596000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:45:06.597000 audit: BPF prog-id=13 op=LOAD
Sep 13 00:45:06.598000 audit: BPF prog-id=14 op=LOAD
Sep 13 00:45:06.598000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:45:06.598000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:45:06.600000 audit: BPF prog-id=15 op=LOAD
Sep 13 00:45:06.600000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 00:45:06.606000 audit: BPF prog-id=16 op=LOAD
Sep 13 00:45:06.608000 audit: BPF prog-id=17 op=LOAD
Sep 13 00:45:06.608000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 00:45:06.608000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 00:45:06.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.614000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 00:45:06.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.774000 audit: BPF prog-id=18 op=LOAD
Sep 13 00:45:06.774000 audit: BPF prog-id=19 op=LOAD
Sep 13 00:45:06.775000 audit: BPF prog-id=20 op=LOAD
Sep 13 00:45:06.775000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 00:45:06.775000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 00:45:06.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:06.836000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:45:06.836000 audit[1407]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe35e84410 a2=4000 a3=7ffe35e844ac items=0 ppid=1 pid=1407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:45:06.836000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:45:06.594583 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:45:02.531244 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:45:06.594596 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Sep 13 00:45:02.532264 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:45:06.609825 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:45:02.532293 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:45:06.850511 systemd[1]: Started systemd-journald.service.
Sep 13 00:45:06.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:02.532339 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 00:45:06.847006 systemd[1]: Mounted media.mount.
Sep 13 00:45:02.532355 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 00:45:06.848688 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:45:02.532401 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 00:45:06.851055 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:45:02.532519 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:45:02.532783 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:45:02.532840 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:45:02.532860 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:45:02.541525 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:45:02.541583 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:45:02.541615 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:45:02.541639 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:45:02.541669 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:45:06.852540 systemd[1]: Mounted tmp.mount. Sep 13 00:45:02.541690 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:45:06.009316 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:45:06.854058 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:45:06.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:45:06.009578 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:45:06.009685 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:45:06.009885 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:45:06.009933 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:45:06.009990 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2025-09-13T00:45:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:45:06.856527 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:45:06.856712 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:45:06.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.860073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:45:06.860250 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:45:06.861515 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:45:06.861689 systemd[1]: Finished modprobe@drm.service. Sep 13 00:45:06.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:45:06.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.863794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:45:06.863963 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:45:06.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.866630 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:45:06.867540 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:45:06.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.869434 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:45:06.869827 systemd[1]: Finished modprobe@loop.service. Sep 13 00:45:06.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.872045 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:45:06.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.873427 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:45:06.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.875180 systemd[1]: Reached target network-pre.target. Sep 13 00:45:06.879923 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:45:06.883132 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:45:06.887974 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:45:06.900557 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:45:06.902968 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:45:06.903850 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:45:06.906028 systemd[1]: Starting systemd-random-seed.service... 
Sep 13 00:45:06.907025 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:45:06.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.911098 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:45:06.912146 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:45:06.914136 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:45:06.917511 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:45:06.939639 systemd-journald[1407]: Time spent on flushing to /var/log/journal/ec241c0b269235c8842fd81c17c172c2 is 71.131ms for 1230 entries. Sep 13 00:45:06.939639 systemd-journald[1407]: System Journal (/var/log/journal/ec241c0b269235c8842fd81c17c172c2) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:45:07.037748 systemd-journald[1407]: Received client request to flush runtime journal. Sep 13 00:45:06.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:06.948234 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:45:06.949165 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:45:06.971267 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:45:06.975336 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:45:06.977823 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:45:07.027364 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:45:07.029745 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:45:07.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.041029 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:45:07.042825 udevadm[1452]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:45:07.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.187533 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:45:07.189238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Sep 13 00:45:07.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.310852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:45:07.556295 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:45:07.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.557000 audit: BPF prog-id=21 op=LOAD Sep 13 00:45:07.557000 audit: BPF prog-id=22 op=LOAD Sep 13 00:45:07.557000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:45:07.557000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:45:07.558174 systemd[1]: Starting systemd-udevd.service... Sep 13 00:45:07.576714 systemd-udevd[1455]: Using default interface naming scheme 'v252'. Sep 13 00:45:07.634908 systemd[1]: Started systemd-udevd.service. Sep 13 00:45:07.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.636000 audit: BPF prog-id=23 op=LOAD Sep 13 00:45:07.640849 systemd[1]: Starting systemd-networkd.service... Sep 13 00:45:07.660000 audit: BPF prog-id=24 op=LOAD Sep 13 00:45:07.660000 audit: BPF prog-id=25 op=LOAD Sep 13 00:45:07.660000 audit: BPF prog-id=26 op=LOAD Sep 13 00:45:07.661385 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:45:07.668928 (udev-worker)[1463]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:45:07.675426 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:45:07.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.694043 systemd[1]: Started systemd-userdbd.service. 
Sep 13 00:45:07.760476 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:45:07.764000 audit[1469]: AVC avc: denied { confidentiality } for pid=1469 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:45:07.764000 audit[1469]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5557d33eab10 a1=338ec a2=7f6d5d4d1bc5 a3=5 items=110 ppid=1455 pid=1469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:45:07.764000 audit: CWD cwd="/" Sep 13 00:45:07.764000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=1 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=2 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=3 name=(null) inode=15448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=4 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=5 name=(null) inode=15449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=6 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=7 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=8 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=9 name=(null) inode=15451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=10 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=11 name=(null) inode=15452 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=12 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 13 00:45:07.764000 audit: PATH item=13 name=(null) inode=15453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=14 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=15 name=(null) inode=15454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=16 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=17 name=(null) inode=15455 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=18 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=19 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=20 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=21 name=(null) inode=15457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=22 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=23 name=(null) inode=15458 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=24 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=25 name=(null) inode=15459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=26 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=27 name=(null) inode=15460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=28 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=29 name=(null) inode=15461 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=30 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=31 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=32 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=33 name=(null) inode=15463 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=34 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=35 name=(null) inode=15464 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=36 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=37 name=(null) inode=15465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=38 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=39 name=(null) inode=15466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=40 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=41 name=(null) inode=15467 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=42 name=(null) inode=15447 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=43 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=44 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=45 name=(null) inode=15469 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=46 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=47 name=(null) inode=15470 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=48 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=49 name=(null) inode=15471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=50 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=51 name=(null) inode=15472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=52 name=(null) inode=15468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=53 name=(null) inode=15473 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=55 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=56 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=57 name=(null) inode=15475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=58 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=59 name=(null) inode=15476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=60 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=61 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=62 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=63 name=(null) inode=15478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=64 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=65 name=(null) inode=15479 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=66 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=67 name=(null) inode=15480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=68 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=69 name=(null) inode=15481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=70 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=71 name=(null) inode=15482 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=72 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=73 name=(null) inode=15483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=74 name=(null) inode=15483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=75 name=(null) inode=15484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=76 name=(null) inode=15483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=77 name=(null) inode=15485 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=78 
name=(null) inode=15483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=79 name=(null) inode=15486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=80 name=(null) inode=15483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=81 name=(null) inode=15487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=82 name=(null) inode=15483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=83 name=(null) inode=15488 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=84 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=85 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=86 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=87 name=(null) inode=15490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=88 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=89 name=(null) inode=15491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=90 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=91 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=92 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=93 name=(null) inode=15493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=94 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=95 name=(null) inode=15494 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=96 name=(null) inode=15474 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=97 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=98 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=99 name=(null) inode=15496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=100 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=101 name=(null) inode=15497 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=102 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=103 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=104 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=105 name=(null) inode=15499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=106 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=107 name=(null) inode=15500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PATH item=109 name=(null) inode=15501 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:45:07.764000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:45:07.784015 systemd-networkd[1470]: lo: Link UP Sep 13 00:45:07.784026 systemd-networkd[1470]: lo: Gained 
carrier Sep 13 00:45:07.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.784856 systemd-networkd[1470]: Enumeration completed Sep 13 00:45:07.784951 systemd[1]: Started systemd-networkd.service. Sep 13 00:45:07.786599 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:45:07.787725 systemd-networkd[1470]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:45:07.791906 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:45:07.791577 systemd-networkd[1470]: eth0: Link UP Sep 13 00:45:07.791721 systemd-networkd[1470]: eth0: Gained carrier Sep 13 00:45:07.802774 systemd-networkd[1470]: eth0: DHCPv4 address 172.31.31.206/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:45:07.803483 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:45:07.806493 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:45:07.806738 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:45:07.807472 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:45:07.821480 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:45:07.827480 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:45:07.928952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:45:07.935964 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:45:07.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:07.938185 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:45:07.987612 lvm[1569]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:45:08.015803 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:45:08.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.016569 systemd[1]: Reached target cryptsetup.target. Sep 13 00:45:08.018443 systemd[1]: Starting lvm2-activation.service... Sep 13 00:45:08.023836 lvm[1570]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:45:08.051847 systemd[1]: Finished lvm2-activation.service. Sep 13 00:45:08.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.052600 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:45:08.053076 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:45:08.053107 systemd[1]: Reached target local-fs.target. Sep 13 00:45:08.053555 systemd[1]: Reached target machines.target. Sep 13 00:45:08.055210 systemd[1]: Starting ldconfig.service... Sep 13 00:45:08.056719 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:45:08.056803 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:45:08.058100 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:45:08.059605 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:45:08.061566 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:45:08.063300 systemd[1]: Starting systemd-sysext.service... Sep 13 00:45:08.071016 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1572 (bootctl) Sep 13 00:45:08.072290 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:45:08.076853 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:45:08.081923 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:45:08.082109 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:45:08.095470 kernel: loop0: detected capacity change from 0 to 229808 Sep 13 00:45:08.109960 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:45:08.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.289536 systemd-fsck[1584]: fsck.fat 4.2 (2021-01-31) Sep 13 00:45:08.289536 systemd-fsck[1584]: /dev/nvme0n1p1: 790 files, 120761/258078 clusters Sep 13 00:45:08.291399 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:45:08.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.293504 systemd[1]: Mounting boot.mount... Sep 13 00:45:08.323183 systemd[1]: Mounted boot.mount. Sep 13 00:45:08.354965 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:45:08.355659 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:45:08.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.358331 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:45:08.411485 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:45:08.428531 kernel: loop1: detected capacity change from 0 to 229808 Sep 13 00:45:08.448319 (sd-sysext)[1599]: Using extensions 'kubernetes'. Sep 13 00:45:08.448870 (sd-sysext)[1599]: Merged extensions into '/usr'. Sep 13 00:45:08.467098 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:45:08.468876 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:45:08.469835 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:45:08.471194 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:45:08.473019 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 13 00:45:08.474694 systemd[1]: Starting modprobe@loop.service... Sep 13 00:45:08.475227 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:45:08.475366 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:45:08.475529 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:45:08.476325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:45:08.476760 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:45:08.479415 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:45:08.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.480582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:45:08.480708 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:45:08.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.481908 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:45:08.482039 systemd[1]: Finished modprobe@loop.service. Sep 13 00:45:08.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.482895 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:45:08.483006 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:45:08.484286 systemd[1]: Finished systemd-sysext.service. Sep 13 00:45:08.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.486095 systemd[1]: Starting ensure-sysext.service... Sep 13 00:45:08.487673 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:45:08.493321 systemd[1]: Reloading. Sep 13 00:45:08.502238 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:45:08.505139 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 13 00:45:08.507411 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:45:08.554361 /usr/lib/systemd/system-generators/torcx-generator[1626]: time="2025-09-13T00:45:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:45:08.554399 /usr/lib/systemd/system-generators/torcx-generator[1626]: time="2025-09-13T00:45:08Z" level=info msg="torcx already run" Sep 13 00:45:08.697959 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:45:08.698249 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:45:08.727824 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:45:08.805000 audit: BPF prog-id=27 op=LOAD Sep 13 00:45:08.805000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:45:08.805000 audit: BPF prog-id=28 op=LOAD Sep 13 00:45:08.805000 audit: BPF prog-id=29 op=LOAD Sep 13 00:45:08.805000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:45:08.805000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:45:08.808000 audit: BPF prog-id=30 op=LOAD Sep 13 00:45:08.808000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:45:08.809000 audit: BPF prog-id=31 op=LOAD Sep 13 00:45:08.809000 audit: BPF prog-id=32 op=LOAD Sep 13 00:45:08.810000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:45:08.810000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:45:08.810000 audit: BPF prog-id=33 op=LOAD Sep 13 00:45:08.810000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:45:08.811000 audit: BPF prog-id=34 op=LOAD Sep 13 00:45:08.811000 audit: BPF prog-id=35 op=LOAD Sep 13 00:45:08.811000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:45:08.811000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:45:08.816006 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:45:08.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.822954 systemd[1]: Starting audit-rules.service... Sep 13 00:45:08.825537 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:45:08.834000 audit: BPF prog-id=36 op=LOAD Sep 13 00:45:08.828354 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:45:08.835820 systemd[1]: Starting systemd-resolved.service... Sep 13 00:45:08.839000 audit: BPF prog-id=37 op=LOAD Sep 13 00:45:08.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:45:08.841061 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:45:08.843501 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:45:08.845118 systemd[1]: Finished clean-ca-certificates.service. 
Sep 13 00:45:08.846133 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:45:08.855619 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.859511 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:45:08.863884 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:45:08.866327 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:45:08.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.870629 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.870857 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:45:08.871023 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:45:08.872442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:45:08.872651 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:45:08.873923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:45:08.874111 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:45:08.875332 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:45:08.875507 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:45:08.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.882000 audit[1688]: SYSTEM_BOOT pid=1688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.878432 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:45:08.878618 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.882921 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.884837 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:45:08.889154 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:45:08.891424 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:45:08.892129 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.892349 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:45:08.892557 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:45:08.893768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:45:08.893979 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:45:08.908126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:45:08.908303 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:45:08.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.917050 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:45:08.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.918357 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:45:08.918584 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:45:08.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.922766 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.925394 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:45:08.928430 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:45:08.932836 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:45:08.933641 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.933754 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:45:08.933885 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:45:08.934696 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:45:08.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.935652 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:45:08.935815 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:45:08.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.940852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:45:08.941031 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:45:08.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.941779 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:45:08.951629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:45:08.951810 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:45:08.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:08.952691 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:45:08.963163 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:45:08.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:45:09.012468 augenrules[1711]: No rules
Sep 13 00:45:09.011000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:45:09.011000 audit[1711]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff64193760 a2=420 a3=0 items=0 ppid=1682 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:45:09.011000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:45:09.012822 systemd[1]: Finished audit-rules.service.
Sep 13 00:45:09.035948 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:45:09.036770 systemd[1]: Reached target time-set.target.
Sep 13 00:45:09.040985 systemd-resolved[1686]: Positive Trust Anchors:
Sep 13 00:45:09.041003 systemd-resolved[1686]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:45:09.041046 systemd-resolved[1686]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:45:09.082605 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:45:09.082626 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:45:09.088027 systemd-resolved[1686]: Defaulting to hostname 'linux'.
Sep 13 00:45:09.089817 systemd[1]: Started systemd-resolved.service.
Sep 13 00:45:09.090311 systemd[1]: Reached target network.target.
Sep 13 00:45:09.090659 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:45:10.177240 systemd-resolved[1686]: Clock change detected. Flushing caches.
Sep 13 00:45:10.177255 systemd-timesyncd[1687]: Contacted time server 24.229.44.105:123 (0.flatcar.pool.ntp.org).
Sep 13 00:45:10.177344 systemd-timesyncd[1687]: Initial clock synchronization to Sat 2025-09-13 00:45:10.177057 UTC.
Sep 13 00:45:10.275467 ldconfig[1571]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:45:10.282428 systemd[1]: Finished ldconfig.service.
Sep 13 00:45:10.284170 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:45:10.293248 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:45:10.293913 systemd[1]: Reached target sysinit.target.
Sep 13 00:45:10.294367 systemd[1]: Started motdgen.path.
Sep 13 00:45:10.294740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:45:10.295211 systemd[1]: Started logrotate.timer.
Sep 13 00:45:10.295626 systemd[1]: Started mdadm.timer.
Sep 13 00:45:10.295938 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:45:10.296237 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:45:10.296267 systemd[1]: Reached target paths.target.
Sep 13 00:45:10.296553 systemd[1]: Reached target timers.target.
Sep 13 00:45:10.297213 systemd[1]: Listening on dbus.socket.
Sep 13 00:45:10.298634 systemd[1]: Starting docker.socket...
Sep 13 00:45:10.302586 systemd[1]: Listening on sshd.socket.
Sep 13 00:45:10.303094 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:45:10.303631 systemd[1]: Listening on docker.socket.
Sep 13 00:45:10.304034 systemd[1]: Reached target sockets.target.
Sep 13 00:45:10.304345 systemd[1]: Reached target basic.target.
Sep 13 00:45:10.304940 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:45:10.304981 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:45:10.306400 systemd[1]: Starting containerd.service...
Sep 13 00:45:10.308407 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 13 00:45:10.310830 systemd[1]: Starting dbus.service...
Sep 13 00:45:10.313127 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:45:10.315034 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:45:10.315697 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:45:10.319815 systemd[1]: Starting motdgen.service...
Sep 13 00:45:10.322112 systemd[1]: Starting prepare-helm.service...
Sep 13 00:45:10.324449 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:45:10.329031 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:45:10.333131 systemd[1]: Starting systemd-logind.service...
Sep 13 00:45:10.349300 jq[1723]: false
Sep 13 00:45:10.334019 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:45:10.334122 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:45:10.334780 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:45:10.337852 systemd[1]: Starting update-engine.service...
Sep 13 00:45:10.340009 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:45:10.349734 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:45:10.349997 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:45:10.374363 jq[1733]: true
Sep 13 00:45:10.370024 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:45:10.370262 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:45:10.390907 jq[1741]: true
Sep 13 00:45:10.409095 tar[1735]: linux-amd64/LICENSE
Sep 13 00:45:10.410191 tar[1735]: linux-amd64/helm
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found loop1
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1p1
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1p2
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1p3
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found usr
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1p4
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1p6
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1p7
Sep 13 00:45:10.423372 extend-filesystems[1724]: Found nvme0n1p9
Sep 13 00:45:10.423372 extend-filesystems[1724]: Checking size of /dev/nvme0n1p9
Sep 13 00:45:10.450888 dbus-daemon[1722]: [system] SELinux support is enabled
Sep 13 00:45:10.451115 systemd[1]: Started dbus.service.
Sep 13 00:45:10.474773 extend-filesystems[1724]: Resized partition /dev/nvme0n1p9
Sep 13 00:45:10.471285 dbus-daemon[1722]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1470 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 13 00:45:10.454886 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:45:10.475470 dbus-daemon[1722]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:45:10.454924 systemd[1]: Reached target system-config.target.
Sep 13 00:45:10.455550 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:45:10.455575 systemd[1]: Reached target user-config.target.
Sep 13 00:45:10.458132 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:45:10.458344 systemd[1]: Finished motdgen.service.
Sep 13 00:45:10.480337 systemd[1]: Starting systemd-hostnamed.service...
Sep 13 00:45:10.491112 extend-filesystems[1771]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:45:10.504011 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 13 00:45:10.620213 env[1736]: time="2025-09-13T00:45:10.620155566Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:45:10.623636 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 13 00:45:10.632138 update_engine[1732]: I0913 00:45:10.631322 1732 main.cc:92] Flatcar Update Engine starting
Sep 13 00:45:10.639797 systemd-networkd[1470]: eth0: Gained IPv6LL
Sep 13 00:45:10.642866 extend-filesystems[1771]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 13 00:45:10.642866 extend-filesystems[1771]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:45:10.642866 extend-filesystems[1771]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 13 00:45:10.643301 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:45:10.648585 extend-filesystems[1724]: Resized filesystem in /dev/nvme0n1p9
Sep 13 00:45:10.645279 systemd[1]: Reached target network-online.target.
Sep 13 00:45:10.648578 systemd[1]: Started amazon-ssm-agent.service.
Sep 13 00:45:10.654250 update_engine[1732]: I0913 00:45:10.654217 1732 update_check_scheduler.cc:74] Next update check in 7m13s
Sep 13 00:45:10.654352 bash[1776]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:45:10.679055 systemd[1]: Starting kubelet.service...
Sep 13 00:45:10.681525 systemd[1]: Started nvidia.service.
Sep 13 00:45:10.694228 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:45:10.694482 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:45:10.696079 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:45:10.705440 systemd[1]: Started update-engine.service.
Sep 13 00:45:10.710014 systemd[1]: Started locksmithd.service.
Sep 13 00:45:10.718052 dbus-daemon[1722]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 13 00:45:10.718282 systemd[1]: Started systemd-hostnamed.service.
Sep 13 00:45:10.719666 dbus-daemon[1722]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1775 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 13 00:45:10.725188 systemd[1]: Starting polkit.service...
Sep 13 00:45:10.738987 systemd-logind[1731]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:45:10.739019 systemd-logind[1731]: Watching system buttons on /dev/input/event2 (Sleep Button)
Sep 13 00:45:10.739043 systemd-logind[1731]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:45:10.739262 systemd-logind[1731]: New seat seat0.
Sep 13 00:45:10.745964 systemd[1]: Started systemd-logind.service.
Sep 13 00:45:10.825179 polkitd[1807]: Started polkitd version 121
Sep 13 00:45:10.896276 polkitd[1807]: Loading rules from directory /etc/polkit-1/rules.d
Sep 13 00:45:10.896354 polkitd[1807]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 13 00:45:10.898990 polkitd[1807]: Finished loading, compiling and executing 2 rules
Sep 13 00:45:10.900822 dbus-daemon[1722]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 13 00:45:10.901012 systemd[1]: Started polkit.service.
Sep 13 00:45:10.903197 polkitd[1807]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 13 00:45:10.962088 amazon-ssm-agent[1796]: 2025/09/13 00:45:10 Failed to load instance info from vault. RegistrationKey does not exist.
Sep 13 00:45:10.962088 amazon-ssm-agent[1796]: Initializing new seelog logger
Sep 13 00:45:10.962088 amazon-ssm-agent[1796]: New Seelog Logger Creation Complete
Sep 13 00:45:10.962088 amazon-ssm-agent[1796]: 2025/09/13 00:45:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 13 00:45:10.962088 amazon-ssm-agent[1796]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 13 00:45:10.962088 amazon-ssm-agent[1796]: 2025/09/13 00:45:10 processing appconfig overrides
Sep 13 00:45:10.963479 env[1736]: time="2025-09-13T00:45:10.962843267Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:45:10.963479 env[1736]: time="2025-09-13T00:45:10.963081889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:45:10.987005 systemd-hostnamed[1775]: Hostname set to (transient)
Sep 13 00:45:10.988690 env[1736]: time="2025-09-13T00:45:10.988636886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:45:10.988794 env[1736]: time="2025-09-13T00:45:10.988689242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:45:10.989055 env[1736]: time="2025-09-13T00:45:10.989021847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:45:10.989418 env[1736]: time="2025-09-13T00:45:10.989057145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:45:10.989418 env[1736]: time="2025-09-13T00:45:10.989078373Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:45:10.989418 env[1736]: time="2025-09-13T00:45:10.989093511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:45:10.989418 env[1736]: time="2025-09-13T00:45:10.989200745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:45:10.989588 env[1736]: time="2025-09-13T00:45:10.989470836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:45:10.991907 env[1736]: time="2025-09-13T00:45:10.991862123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:45:10.991907 env[1736]: time="2025-09-13T00:45:10.991906728Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:45:10.992031 env[1736]: time="2025-09-13T00:45:10.991995795Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:45:10.992031 env[1736]: time="2025-09-13T00:45:10.992014570Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:45:11.005125 systemd-resolved[1686]: System hostname changed to 'ip-172-31-31-206'.
Sep 13 00:45:11.013171 env[1736]: time="2025-09-13T00:45:11.013120500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:45:11.013318 env[1736]: time="2025-09-13T00:45:11.013205354Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:45:11.013318 env[1736]: time="2025-09-13T00:45:11.013225557Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:45:11.013318 env[1736]: time="2025-09-13T00:45:11.013280882Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013318 env[1736]: time="2025-09-13T00:45:11.013303626Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013474 env[1736]: time="2025-09-13T00:45:11.013372879Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013474 env[1736]: time="2025-09-13T00:45:11.013394555Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013474 env[1736]: time="2025-09-13T00:45:11.013416556Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013474 env[1736]: time="2025-09-13T00:45:11.013437183Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013474 env[1736]: time="2025-09-13T00:45:11.013457946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013684 env[1736]: time="2025-09-13T00:45:11.013488910Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.013684 env[1736]: time="2025-09-13T00:45:11.013509536Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:45:11.013761 env[1736]: time="2025-09-13T00:45:11.013723436Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:45:11.014624 env[1736]: time="2025-09-13T00:45:11.013871500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:45:11.014624 env[1736]: time="2025-09-13T00:45:11.014358543Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:45:11.014624 env[1736]: time="2025-09-13T00:45:11.014413156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014624 env[1736]: time="2025-09-13T00:45:11.014433044Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:45:11.014624 env[1736]: time="2025-09-13T00:45:11.014586670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014627814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014647831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014663293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014679257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014719044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014736586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014755374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.014874 env[1736]: time="2025-09-13T00:45:11.014792536Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:45:11.017410 env[1736]: time="2025-09-13T00:45:11.014990165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.017518 env[1736]: time="2025-09-13T00:45:11.017423751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.017518 env[1736]: time="2025-09-13T00:45:11.017448717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.017518 env[1736]: time="2025-09-13T00:45:11.017494715Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:45:11.017665 env[1736]: time="2025-09-13T00:45:11.017517575Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:45:11.017665 env[1736]: time="2025-09-13T00:45:11.017550246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:45:11.017665 env[1736]: time="2025-09-13T00:45:11.017578150Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:45:11.017665 env[1736]: time="2025-09-13T00:45:11.017649063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:45:11.018458 env[1736]: time="2025-09-13T00:45:11.017988297Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:45:11.018458 env[1736]: time="2025-09-13T00:45:11.018099185Z" level=info msg="Connect containerd service"
Sep 13 00:45:11.018458 env[1736]: time="2025-09-13T00:45:11.018166442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:45:11.035640 env[1736]: time="2025-09-13T00:45:11.035561890Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:45:11.035839 env[1736]: time="2025-09-13T00:45:11.035798128Z" level=info msg="Start subscribing containerd event"
Sep 13 00:45:11.035894 env[1736]: time="2025-09-13T00:45:11.035877517Z" level=info msg="Start recovering state"
Sep 13 00:45:11.035990 env[1736]: time="2025-09-13T00:45:11.035974592Z" level=info msg="Start event monitor"
Sep 13 00:45:11.036051 env[1736]: time="2025-09-13T00:45:11.036017004Z" level=info msg="Start snapshots syncer"
Sep 13 00:45:11.036051 env[1736]: time="2025-09-13T00:45:11.036031931Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:45:11.036051 env[1736]: time="2025-09-13T00:45:11.036045173Z" level=info msg="Start streaming server"
Sep 13 00:45:11.036632 env[1736]: time="2025-09-13T00:45:11.036587915Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:45:11.036735 env[1736]: time="2025-09-13T00:45:11.036670359Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:45:11.083039 systemd[1]: Started containerd.service.
Sep 13 00:45:11.085042 env[1736]: time="2025-09-13T00:45:11.085000485Z" level=info msg="containerd successfully booted in 0.483123s"
Sep 13 00:45:11.168836 systemd[1]: nvidia.service: Deactivated successfully.
Sep 13 00:45:11.301371 coreos-metadata[1721]: Sep 13 00:45:11.301 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 13 00:45:11.311391 coreos-metadata[1721]: Sep 13 00:45:11.307 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Sep 13 00:45:11.311391 coreos-metadata[1721]: Sep 13 00:45:11.308 INFO Fetch successful
Sep 13 00:45:11.311391 coreos-metadata[1721]: Sep 13 00:45:11.308 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 13 00:45:11.311391 coreos-metadata[1721]: Sep 13 00:45:11.309 INFO Fetch successful
Sep 13 00:45:11.313338 unknown[1721]: wrote ssh authorized keys file for user: core
Sep 13 00:45:11.336763 update-ssh-keys[1908]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:45:11.337313 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 13 00:45:11.510438 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Create new startup processor
Sep 13 00:45:11.511019 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [LongRunningPluginsManager] registered plugins: {}
Sep 13 00:45:11.511116 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing bookkeeping folders
Sep 13 00:45:11.511116 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO removing the completed state files
Sep 13 00:45:11.511116 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing bookkeeping folders for long running plugins
Sep 13 00:45:11.511116 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Sep 13 00:45:11.511273 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing healthcheck folders for long running plugins
Sep 13 00:45:11.511273 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing locations for inventory plugin
Sep 13 00:45:11.511273 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing default location for custom inventory
Sep 13 00:45:11.511273 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing default location for file inventory
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Initializing default location for role inventory
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Init the cloudwatchlogs publisher
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:softwareInventory
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:runPowerShellScript
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:updateSsmAgent
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:configureDocker
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:runDockerAction
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:refreshAssociation
Sep 13 00:45:11.511425 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:configurePackage
Sep 13 00:45:11.511758 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:downloadContent
Sep 13 00:45:11.511758 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform independent plugin aws:runDocument
Sep 13 00:45:11.511758 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Successfully loaded platform dependent plugin aws:runShellScript
Sep 13 00:45:11.511758 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Sep 13 00:45:11.511758 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO OS: linux, Arch: amd64
Sep 13 00:45:11.516939 amazon-ssm-agent[1796]: datastore file /var/lib/amazon/ssm/i-06a967c5c95eb19f8/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Sep 13 00:45:11.609358 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] Starting document processing engine...
Sep 13 00:45:11.704479 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Sep 13 00:45:11.798818 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Sep 13 00:45:11.809561 tar[1735]: linux-amd64/README.md
Sep 13 00:45:11.817162 systemd[1]: Finished prepare-helm.service.
Sep 13 00:45:11.893394 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] Starting message polling
Sep 13 00:45:11.929737 locksmithd[1802]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:45:11.988002 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] Starting send replies to MDS
Sep 13 00:45:12.017915 sshd_keygen[1749]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:45:12.043008 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:45:12.045882 systemd[1]: Starting issuegen.service...
Sep 13 00:45:12.054495 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:45:12.054749 systemd[1]: Finished issuegen.service.
Sep 13 00:45:12.057522 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:45:12.066516 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:45:12.068895 systemd[1]: Started getty@tty1.service.
Sep 13 00:45:12.071240 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:45:12.072100 systemd[1]: Reached target getty.target.
Sep 13 00:45:12.082928 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [instanceID=i-06a967c5c95eb19f8] Starting association polling
Sep 13 00:45:12.178063 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Sep 13 00:45:12.273326 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] [Association] Launching response handler
Sep 13 00:45:12.368852 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Sep 13 00:45:12.464638 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Sep 13 00:45:12.560529 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Sep 13 00:45:12.656568 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] Starting session document processing engine...
Sep 13 00:45:12.752894 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] [EngineProcessor] Starting
Sep 13 00:45:12.849341 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Sep 13 00:45:12.946065 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-06a967c5c95eb19f8, requestId: a31d9b1a-c82e-4c45-9ebb-2b799a3e86e8
Sep 13 00:45:13.042981 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [OfflineService] Starting document processing engine...
Sep 13 00:45:13.139962 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [OfflineService] [EngineProcessor] Starting
Sep 13 00:45:13.237211 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [OfflineService] [EngineProcessor] Initial processing
Sep 13 00:45:13.334777 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [OfflineService] Starting message polling
Sep 13 00:45:13.432456 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [OfflineService] Starting send replies to MDS
Sep 13 00:45:13.530336 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [LongRunningPluginsManager] starting long running plugin manager
Sep 13 00:45:13.628412 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Sep 13 00:45:13.726621 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [HealthCheck] HealthCheck reporting agent health.
Sep 13 00:45:13.825061 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Sep 13 00:45:13.923912 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] listening reply.
Sep 13 00:45:14.022761 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [StartupProcessor] Executing startup processor tasks
Sep 13 00:45:14.121813 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Sep 13 00:45:14.221136 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Sep 13 00:45:14.320567 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8
Sep 13 00:45:14.420179 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06a967c5c95eb19f8?role=subscribe&stream=input
Sep 13 00:45:14.520154 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06a967c5c95eb19f8?role=subscribe&stream=input
Sep 13 00:45:14.620288 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] Starting receiving message from control channel
Sep 13 00:45:14.720671 amazon-ssm-agent[1796]: 2025-09-13 00:45:11 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Sep 13 00:45:15.079187 systemd[1]: Started kubelet.service.
Sep 13 00:45:15.080189 systemd[1]: Reached target multi-user.target.
Sep 13 00:45:15.082182 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:45:15.090207 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:45:15.090371 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:45:15.091209 systemd[1]: Startup finished in 581ms (kernel) + 7.187s (initrd) + 12.304s (userspace) = 20.073s.
Sep 13 00:45:17.083490 kubelet[1932]: E0913 00:45:17.083441 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:45:17.085749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:45:17.085883 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:45:17.086109 systemd[1]: kubelet.service: Consumed 1.155s CPU time.
Sep 13 00:45:19.442632 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:45:19.443876 systemd[1]: Started sshd@0-172.31.31.206:22-147.75.109.163:55372.service.
Sep 13 00:45:19.665013 sshd[1940]: Accepted publickey for core from 147.75.109.163 port 55372 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:45:19.667878 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:45:19.682107 systemd[1]: Created slice user-500.slice.
Sep 13 00:45:19.684124 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:45:19.687571 systemd-logind[1731]: New session 1 of user core.
Sep 13 00:45:19.697092 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:45:19.699063 systemd[1]: Starting user@500.service...
Sep 13 00:45:19.703305 (systemd)[1943]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:45:19.801383 systemd[1943]: Queued start job for default target default.target.
Sep 13 00:45:19.802086 systemd[1943]: Reached target paths.target.
Sep 13 00:45:19.802113 systemd[1943]: Reached target sockets.target.
Sep 13 00:45:19.802127 systemd[1943]: Reached target timers.target.
Sep 13 00:45:19.802142 systemd[1943]: Reached target basic.target.
Sep 13 00:45:19.802248 systemd[1]: Started user@500.service.
Sep 13 00:45:19.803150 systemd[1]: Started session-1.scope.
Sep 13 00:45:19.803596 systemd[1943]: Reached target default.target.
Sep 13 00:45:19.803760 systemd[1943]: Startup finished in 93ms.
Sep 13 00:45:19.946766 systemd[1]: Started sshd@1-172.31.31.206:22-147.75.109.163:41886.service.
Sep 13 00:45:20.108112 sshd[1952]: Accepted publickey for core from 147.75.109.163 port 41886 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:45:20.109377 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:45:20.114013 systemd-logind[1731]: New session 2 of user core.
Sep 13 00:45:20.115432 systemd[1]: Started session-2.scope.
Sep 13 00:45:20.240526 sshd[1952]: pam_unix(sshd:session): session closed for user core
Sep 13 00:45:20.243955 systemd[1]: sshd@1-172.31.31.206:22-147.75.109.163:41886.service: Deactivated successfully.
Sep 13 00:45:20.244856 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:45:20.245716 systemd-logind[1731]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:45:20.246760 systemd-logind[1731]: Removed session 2.
Sep 13 00:45:20.266154 systemd[1]: Started sshd@2-172.31.31.206:22-147.75.109.163:41900.service.
Sep 13 00:45:20.429848 sshd[1958]: Accepted publickey for core from 147.75.109.163 port 41900 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:45:20.430728 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:45:20.435414 systemd-logind[1731]: New session 3 of user core.
Sep 13 00:45:20.435902 systemd[1]: Started session-3.scope.
Sep 13 00:45:20.558881 sshd[1958]: pam_unix(sshd:session): session closed for user core
Sep 13 00:45:20.561736 systemd[1]: sshd@2-172.31.31.206:22-147.75.109.163:41900.service: Deactivated successfully.
Sep 13 00:45:20.562400 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:45:20.562904 systemd-logind[1731]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:45:20.563638 systemd-logind[1731]: Removed session 3.
Sep 13 00:45:20.583293 systemd[1]: Started sshd@3-172.31.31.206:22-147.75.109.163:41906.service.
Sep 13 00:45:20.744286 sshd[1964]: Accepted publickey for core from 147.75.109.163 port 41906 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:45:20.746074 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:45:20.750589 systemd-logind[1731]: New session 4 of user core.
Sep 13 00:45:20.751107 systemd[1]: Started session-4.scope.
Sep 13 00:45:20.878366 sshd[1964]: pam_unix(sshd:session): session closed for user core
Sep 13 00:45:20.880990 systemd[1]: sshd@3-172.31.31.206:22-147.75.109.163:41906.service: Deactivated successfully.
Sep 13 00:45:20.881779 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:45:20.882286 systemd-logind[1731]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:45:20.883037 systemd-logind[1731]: Removed session 4.
Sep 13 00:45:20.903810 systemd[1]: Started sshd@4-172.31.31.206:22-147.75.109.163:41914.service.
Sep 13 00:45:21.065138 sshd[1970]: Accepted publickey for core from 147.75.109.163 port 41914 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:45:21.066926 sshd[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:45:21.072547 systemd[1]: Started session-5.scope.
Sep 13 00:45:21.073239 systemd-logind[1731]: New session 5 of user core.
Sep 13 00:45:21.200945 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:45:21.201198 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:45:21.229054 systemd[1]: Starting docker.service...
Sep 13 00:45:21.270752 env[1983]: time="2025-09-13T00:45:21.270711897Z" level=info msg="Starting up"
Sep 13 00:45:21.272307 env[1983]: time="2025-09-13T00:45:21.272262514Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:45:21.272307 env[1983]: time="2025-09-13T00:45:21.272290607Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:45:21.272461 env[1983]: time="2025-09-13T00:45:21.272317780Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:45:21.272461 env[1983]: time="2025-09-13T00:45:21.272332379Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:45:21.274551 env[1983]: time="2025-09-13T00:45:21.274512392Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:45:21.274551 env[1983]: time="2025-09-13T00:45:21.274536194Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:45:21.274749 env[1983]: time="2025-09-13T00:45:21.274557899Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:45:21.274749 env[1983]: time="2025-09-13T00:45:21.274572392Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:45:21.283569 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2458436200-merged.mount: Deactivated successfully.
Sep 13 00:45:21.324576 env[1983]: time="2025-09-13T00:45:21.323959763Z" level=info msg="Loading containers: start."
Sep 13 00:45:21.523631 kernel: Initializing XFRM netlink socket
Sep 13 00:45:21.584669 env[1983]: time="2025-09-13T00:45:21.584376479Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:45:21.585365 (udev-worker)[1993]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:45:21.676776 systemd-networkd[1470]: docker0: Link UP
Sep 13 00:45:21.692071 env[1983]: time="2025-09-13T00:45:21.692036124Z" level=info msg="Loading containers: done."
Sep 13 00:45:21.705858 env[1983]: time="2025-09-13T00:45:21.705810742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:45:21.706034 env[1983]: time="2025-09-13T00:45:21.705988883Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:45:21.706097 env[1983]: time="2025-09-13T00:45:21.706077291Z" level=info msg="Daemon has completed initialization"
Sep 13 00:45:21.721267 systemd[1]: Started docker.service.
Sep 13 00:45:21.729675 env[1983]: time="2025-09-13T00:45:21.729499742Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:45:23.599315 env[1736]: time="2025-09-13T00:45:23.599187855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 00:45:24.142346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864437367.mount: Deactivated successfully.
Sep 13 00:45:26.185681 env[1736]: time="2025-09-13T00:45:26.185594129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:26.188097 env[1736]: time="2025-09-13T00:45:26.188057027Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:26.190027 env[1736]: time="2025-09-13T00:45:26.189995107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:26.192340 env[1736]: time="2025-09-13T00:45:26.192297252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:26.193159 env[1736]: time="2025-09-13T00:45:26.193120440Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 13 00:45:26.194310 env[1736]: time="2025-09-13T00:45:26.194279275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 00:45:27.336762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:45:27.336961 systemd[1]: Stopped kubelet.service.
Sep 13 00:45:27.337010 systemd[1]: kubelet.service: Consumed 1.155s CPU time.
Sep 13 00:45:27.338521 systemd[1]: Starting kubelet.service...
Sep 13 00:45:27.576901 systemd[1]: Started kubelet.service.
Sep 13 00:45:27.656961 kubelet[2107]: E0913 00:45:27.656833 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:45:27.662387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:45:27.662563 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:45:28.407121 env[1736]: time="2025-09-13T00:45:28.407056862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:28.409049 env[1736]: time="2025-09-13T00:45:28.409011527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:28.411119 env[1736]: time="2025-09-13T00:45:28.411088455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:28.412710 env[1736]: time="2025-09-13T00:45:28.412683267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:28.413654 env[1736]: time="2025-09-13T00:45:28.413622725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 00:45:28.414156 env[1736]: time="2025-09-13T00:45:28.414136253Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 00:45:30.253215 env[1736]: time="2025-09-13T00:45:30.253162512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:30.256432 env[1736]: time="2025-09-13T00:45:30.256393204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:30.258866 env[1736]: time="2025-09-13T00:45:30.258827362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:30.261528 env[1736]: time="2025-09-13T00:45:30.261482128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:30.262519 env[1736]: time="2025-09-13T00:45:30.262474925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 00:45:30.263251 env[1736]: time="2025-09-13T00:45:30.263223749Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 00:45:31.541514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677336620.mount: Deactivated successfully.
Sep 13 00:45:32.306266 env[1736]: time="2025-09-13T00:45:32.306207683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:32.317595 env[1736]: time="2025-09-13T00:45:32.317540886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:32.322571 env[1736]: time="2025-09-13T00:45:32.322521924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:32.327142 env[1736]: time="2025-09-13T00:45:32.327099984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:32.327490 env[1736]: time="2025-09-13T00:45:32.327460241Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 00:45:32.328114 env[1736]: time="2025-09-13T00:45:32.328087862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 00:45:32.777867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598878471.mount: Deactivated successfully.
Sep 13 00:45:34.181333 amazon-ssm-agent[1796]: 2025-09-13 00:45:34 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Sep 13 00:45:34.369756 env[1736]: time="2025-09-13T00:45:34.369702672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.372848 env[1736]: time="2025-09-13T00:45:34.372811928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.375543 env[1736]: time="2025-09-13T00:45:34.375501580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.378000 env[1736]: time="2025-09-13T00:45:34.377959598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.378850 env[1736]: time="2025-09-13T00:45:34.378819684Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 00:45:34.379371 env[1736]: time="2025-09-13T00:45:34.379349161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:45:34.828366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766143744.mount: Deactivated successfully.
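Each pull above shows up as an ImageCreate/ImageUpdate pair: one event for the tag, one for the config blob digest, and one for the repo digest. Assuming containerd's default CRI socket, the same images could be listed or re-pulled with crictl; the tool itself does not appear anywhere in this log:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/coredns/coredns:v1.12.0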
Sep 13 00:45:34.834022 env[1736]: time="2025-09-13T00:45:34.833968489Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.835954 env[1736]: time="2025-09-13T00:45:34.835911397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.837513 env[1736]: time="2025-09-13T00:45:34.837478776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.839220 env[1736]: time="2025-09-13T00:45:34.839185076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:34.839792 env[1736]: time="2025-09-13T00:45:34.839761858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:45:34.840279 env[1736]: time="2025-09-13T00:45:34.840258970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 00:45:35.267864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818436847.mount: Deactivated successfully.
Sep 13 00:45:37.778073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:45:37.778261 systemd[1]: Stopped kubelet.service.
Sep 13 00:45:37.779758 systemd[1]: Starting kubelet.service...
Sep 13 00:45:37.930555 env[1736]: time="2025-09-13T00:45:37.930495467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:37.933752 env[1736]: time="2025-09-13T00:45:37.933171520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:37.935722 env[1736]: time="2025-09-13T00:45:37.935691867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:37.938118 env[1736]: time="2025-09-13T00:45:37.938085215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:37.939034 env[1736]: time="2025-09-13T00:45:37.938813884Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 13 00:45:38.512391 amazon-ssm-agent[1796]: 2025-09-13 00:45:38 INFO [HealthCheck] HealthCheck reporting agent health.
Sep 13 00:45:38.644232 systemd[1]: Started kubelet.service.
Sep 13 00:45:38.703536 kubelet[2137]: E0913 00:45:38.703492 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:45:38.706446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:45:38.706636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:45:41.020045 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 13 00:45:41.255307 systemd[1]: Stopped kubelet.service.
Sep 13 00:45:41.258413 systemd[1]: Starting kubelet.service...
Sep 13 00:45:41.294732 systemd[1]: Reloading.
Sep 13 00:45:41.400941 /usr/lib/systemd/system-generators/torcx-generator[2171]: time="2025-09-13T00:45:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:45:41.400968 /usr/lib/systemd/system-generators/torcx-generator[2171]: time="2025-09-13T00:45:41Z" level=info msg="torcx already run"
Sep 13 00:45:41.499529 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:45:41.499553 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:45:41.518105 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:45:41.618968 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:45:41.619071 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:45:41.619341 systemd[1]: Stopped kubelet.service.
Sep 13 00:45:41.621776 systemd[1]: Starting kubelet.service...
Sep 13 00:45:41.835369 systemd[1]: Started kubelet.service.
Sep 13 00:45:41.887951 kubelet[2233]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:45:41.887951 kubelet[2233]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:45:41.887951 kubelet[2233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
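The two locksmithd warnings above flag cgroup-v1 directives that systemd still accepts on this host but plans to drop. A hedged sketch of the migration as a drop-in; the path and the concrete values are hypothetical, since the log does not show the unit's actual limits:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (hypothetical drop-in)
    [Service]
    CPUShares=          # empty assignment clears the deprecated setting
    CPUWeight=100       # systemd's cgroup-v2 counterpart of the old 1024-share default
    MemoryLimit=
    MemoryMax=128M      # illustrative value only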
Sep 13 00:45:41.887951 kubelet[2233]: I0913 00:45:41.887701 2233 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:45:42.517378 kubelet[2233]: I0913 00:45:42.517329 2233 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:45:42.517378 kubelet[2233]: I0913 00:45:42.517364 2233 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:45:42.517703 kubelet[2233]: I0913 00:45:42.517681 2233 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:45:42.563287 kubelet[2233]: I0913 00:45:42.563250 2233 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:45:42.563879 kubelet[2233]: E0913 00:45:42.563852 2233 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 00:45:42.570378 kubelet[2233]: E0913 00:45:42.570317 2233 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:45:42.570378 kubelet[2233]: I0913 00:45:42.570359 2233 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:45:42.573766 kubelet[2233]: I0913 00:45:42.573734 2233 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:45:42.574069 kubelet[2233]: I0913 00:45:42.574014 2233 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:45:42.574219 kubelet[2233]: I0913 00:45:42.574041 2233 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-206","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:45:42.574320 kubelet[2233]: I0913 00:45:42.574220 2233 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:45:42.574320 kubelet[2233]: I0913 00:45:42.574230 2233 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:45:42.575708 kubelet[2233]: I0913 00:45:42.575556 2233 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:45:42.579799 kubelet[2233]: I0913 00:45:42.579774 2233 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:45:42.579899 kubelet[2233]: I0913 00:45:42.579814 2233 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:45:42.579899 kubelet[2233]: I0913 00:45:42.579847 2233 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:45:42.579899 kubelet[2233]: I0913 00:45:42.579860 2233 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:45:42.596504 kubelet[2233]: E0913 00:45:42.596450 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-206&limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:45:42.601487 kubelet[2233]: E0913 00:45:42.601442 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:45:42.602454 kubelet[2233]: I0913 00:45:42.601874 2233 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:45:42.602520 kubelet[2233]: I0913 00:45:42.602461 2233 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:45:42.603660 kubelet[2233]: W0913 00:45:42.603637 2233 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:45:42.607291 kubelet[2233]: I0913 00:45:42.607270 2233 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:45:42.607400 kubelet[2233]: I0913 00:45:42.607342 2233 server.go:1289] "Started kubelet"
Sep 13 00:45:42.632092 kubelet[2233]: I0913 00:45:42.632039 2233 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:45:42.635242 kubelet[2233]: I0913 00:45:42.634868 2233 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:45:42.637433 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 00:45:42.637647 kubelet[2233]: I0913 00:45:42.637627 2233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:45:42.643418 kubelet[2233]: I0913 00:45:42.643365 2233 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:45:42.643774 kubelet[2233]: I0913 00:45:42.643760 2233 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:45:42.643880 kubelet[2233]: I0913 00:45:42.643413 2233 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:45:42.645853 kubelet[2233]: I0913 00:45:42.644961 2233 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:45:42.645853 kubelet[2233]: E0913 00:45:42.645284 2233 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-206\" not found"
Sep 13 00:45:42.645853 kubelet[2233]: I0913 00:45:42.645648 2233 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:45:42.645853 kubelet[2233]: I0913 00:45:42.645692 2233 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:45:42.646446 kubelet[2233]: E0913 00:45:42.646346 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:45:42.646446 kubelet[2233]: E0913 00:45:42.646428 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-206?timeout=10s\": dial tcp 172.31.31.206:6443: connect: connection refused" interval="200ms"
Sep 13 00:45:42.646716 kubelet[2233]: E0913 00:45:42.644317 2233 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.206:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-206.1864b0fa0e2a2135 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-206,UID:ip-172-31-31-206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-206,},FirstTimestamp:2025-09-13 00:45:42.607290677 +0000 UTC m=+0.767489164,LastTimestamp:2025-09-13 00:45:42.607290677 +0000 UTC m=+0.767489164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-206,}"
Sep 13 00:45:42.647904 kubelet[2233]: I0913 00:45:42.647889 2233 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:45:42.648055 kubelet[2233]: I0913 00:45:42.648040 2233 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:45:42.649195 kubelet[2233]: I0913 00:45:42.649140 2233 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:45:42.649710 kubelet[2233]: I0913 00:45:42.649698 2233 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:45:42.667427 kubelet[2233]: I0913 00:45:42.667398 2233 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:45:42.667648 kubelet[2233]: I0913 00:45:42.667585 2233 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:45:42.667648 kubelet[2233]: I0913 00:45:42.667634 2233 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:45:42.667648 kubelet[2233]: I0913 00:45:42.667642 2233 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:45:42.667794 kubelet[2233]: E0913 00:45:42.667687 2233 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:45:42.672268 kubelet[2233]: E0913 00:45:42.672232 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:45:42.672710 kubelet[2233]: E0913 00:45:42.672682 2233 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:45:42.676663 kubelet[2233]: I0913 00:45:42.676642 2233 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:45:42.676663 kubelet[2233]: I0913 00:45:42.676656 2233 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:45:42.676810 kubelet[2233]: I0913 00:45:42.676672 2233 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:45:42.681747 kubelet[2233]: I0913 00:45:42.681719 2233 policy_none.go:49] "None policy: Start"
Sep 13 00:45:42.681747 kubelet[2233]: I0913 00:45:42.681746 2233 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:45:42.681874 kubelet[2233]: I0913 00:45:42.681758 2233 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:45:42.688765 systemd[1]: Created slice kubepods.slice.
Sep 13 00:45:42.693145 systemd[1]: Created slice kubepods-burstable.slice.
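For reference, the HardEvictionThresholds embedded in the nodeConfig dump above are the kubelet's stock defaults. Expressed as a KubeletConfiguration fragment (the actual config file is not part of the log, and the runtime endpoint value below is an assumption based on the deprecated flag warnings earlier) they would read roughly:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"  # assumed; replaces the deprecated flag
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"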
Sep 13 00:45:42.696128 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 13 00:45:42.703420 kubelet[2233]: E0913 00:45:42.703397 2233 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:45:42.703727 kubelet[2233]: I0913 00:45:42.703714 2233 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:45:42.703848 kubelet[2233]: I0913 00:45:42.703818 2233 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:45:42.706868 kubelet[2233]: I0913 00:45:42.706852 2233 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:45:42.707259 kubelet[2233]: E0913 00:45:42.706994 2233 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:45:42.707432 kubelet[2233]: E0913 00:45:42.707415 2233 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-206\" not found"
Sep 13 00:45:42.794680 systemd[1]: Created slice kubepods-burstable-podacf327c39b6061201c72e92f01afaff4.slice.
Sep 13 00:45:42.801725 kubelet[2233]: E0913 00:45:42.801703 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:42.805085 systemd[1]: Created slice kubepods-burstable-pod6982ed580ad99cd1d29c0a15e8b0c715.slice.
Sep 13 00:45:42.806894 kubelet[2233]: I0913 00:45:42.806871 2233 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-206"
Sep 13 00:45:42.807492 kubelet[2233]: E0913 00:45:42.807472 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.206:6443/api/v1/nodes\": dial tcp 172.31.31.206:6443: connect: connection refused" node="ip-172-31-31-206"
Sep 13 00:45:42.807738 kubelet[2233]: E0913 00:45:42.807726 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:42.813912 systemd[1]: Created slice kubepods-burstable-pode4f915ff71b2b421220936b59d1198ed.slice.
Sep 13 00:45:42.815689 kubelet[2233]: E0913 00:45:42.815665 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:42.847483 kubelet[2233]: E0913 00:45:42.847436 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-206?timeout=10s\": dial tcp 172.31.31.206:6443: connect: connection refused" interval="400ms"
Sep 13 00:45:42.946916 kubelet[2233]: I0913 00:45:42.946874 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acf327c39b6061201c72e92f01afaff4-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-206\" (UID: \"acf327c39b6061201c72e92f01afaff4\") " pod="kube-system/kube-scheduler-ip-172-31-31-206"
Sep 13 00:45:42.946916 kubelet[2233]: I0913 00:45:42.946912 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6982ed580ad99cd1d29c0a15e8b0c715-ca-certs\") pod \"kube-apiserver-ip-172-31-31-206\" (UID: \"6982ed580ad99cd1d29c0a15e8b0c715\") " pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:42.946916 kubelet[2233]: I0913 00:45:42.946930 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:42.947415 kubelet[2233]: I0913 00:45:42.946957 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:42.947415 kubelet[2233]: I0913 00:45:42.946977 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6982ed580ad99cd1d29c0a15e8b0c715-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-206\" (UID: \"6982ed580ad99cd1d29c0a15e8b0c715\") " pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:42.947415 kubelet[2233]: I0913 00:45:42.946993 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6982ed580ad99cd1d29c0a15e8b0c715-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-206\" (UID: \"6982ed580ad99cd1d29c0a15e8b0c715\") " pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:42.947415 kubelet[2233]: I0913 00:45:42.947009 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:42.947415 kubelet[2233]: I0913 00:45:42.947026 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:42.947552 kubelet[2233]: I0913 00:45:42.947050 2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:43.010052 kubelet[2233]: I0913 00:45:43.010019 2233 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-206"
Sep 13 00:45:43.010415 kubelet[2233]: E0913 00:45:43.010389 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.206:6443/api/v1/nodes\": dial tcp 172.31.31.206:6443: connect: connection refused" node="ip-172-31-31-206"
Sep 13 00:45:43.104383 env[1736]: time="2025-09-13T00:45:43.104275429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-206,Uid:acf327c39b6061201c72e92f01afaff4,Namespace:kube-system,Attempt:0,}"
Sep 13 00:45:43.109000 env[1736]: time="2025-09-13T00:45:43.108952447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-206,Uid:6982ed580ad99cd1d29c0a15e8b0c715,Namespace:kube-system,Attempt:0,}"
Sep 13 00:45:43.117472 env[1736]: time="2025-09-13T00:45:43.117415443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-206,Uid:e4f915ff71b2b421220936b59d1198ed,Namespace:kube-system,Attempt:0,}"
Sep 13 00:45:43.248177 kubelet[2233]: E0913 00:45:43.248058 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-206?timeout=10s\": dial tcp 172.31.31.206:6443: connect: connection refused" interval="800ms"
Sep 13 00:45:43.412208 kubelet[2233]: I0913 00:45:43.412116 2233 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-206"
Sep 13 00:45:43.412622 kubelet[2233]: E0913 00:45:43.412583 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.206:6443/api/v1/nodes\": dial tcp 172.31.31.206:6443: connect: connection refused" node="ip-172-31-31-206"
Sep 13 00:45:43.468731 kubelet[2233]: E0913 00:45:43.468691 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:45:43.528694 kubelet[2233]: E0913 00:45:43.528655 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-206&limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:45:43.540331 kubelet[2233]: E0913 00:45:43.540290 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:45:43.560852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543755665.mount: Deactivated successfully.
Sep 13 00:45:43.575592 env[1736]: time="2025-09-13T00:45:43.575546493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.577773 env[1736]: time="2025-09-13T00:45:43.577725896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.583782 env[1736]: time="2025-09-13T00:45:43.583743050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.586136 env[1736]: time="2025-09-13T00:45:43.586092493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.587676 env[1736]: time="2025-09-13T00:45:43.587647297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.590041 env[1736]: time="2025-09-13T00:45:43.590008549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.594348 env[1736]: time="2025-09-13T00:45:43.594307953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.596408 env[1736]: time="2025-09-13T00:45:43.596365003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.600509 env[1736]: time="2025-09-13T00:45:43.600455385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.602761 env[1736]: time="2025-09-13T00:45:43.602708027Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.604837 env[1736]: time="2025-09-13T00:45:43.604784763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.606516 env[1736]: time="2025-09-13T00:45:43.606483219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:45:43.632239 kubelet[2233]: E0913 00:45:43.630334 2233 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.206:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-206.1864b0fa0e2a2135 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-206,UID:ip-172-31-31-206,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-206,},FirstTimestamp:2025-09-13 00:45:42.607290677 +0000 UTC m=+0.767489164,LastTimestamp:2025-09-13 00:45:42.607290677 +0000 UTC m=+0.767489164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-206,}"
Sep 13 00:45:43.643656 env[1736]: time="2025-09-13T00:45:43.643530840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:45:43.643656 env[1736]: time="2025-09-13T00:45:43.643580327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:45:43.644251 env[1736]: time="2025-09-13T00:45:43.643621922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:45:43.645440 env[1736]: time="2025-09-13T00:45:43.645331645Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f75ed425c396b5394bc73375bc5d971c3baa31a4dee114a6ee59f7ff07898d4 pid=2276 runtime=io.containerd.runc.v2
Sep 13 00:45:43.673087 systemd[1]: Started cri-containerd-6f75ed425c396b5394bc73375bc5d971c3baa31a4dee114a6ee59f7ff07898d4.scope.
Sep 13 00:45:43.694655 env[1736]: time="2025-09-13T00:45:43.693811858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:45:43.694655 env[1736]: time="2025-09-13T00:45:43.693925900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:45:43.694655 env[1736]: time="2025-09-13T00:45:43.693959544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:45:43.694655 env[1736]: time="2025-09-13T00:45:43.694414160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/412954892665d07d39e6137a35a2f140998fad880a92f0c6b0158df5f277b8d9 pid=2323 runtime=io.containerd.runc.v2
Sep 13 00:45:43.695514 env[1736]: time="2025-09-13T00:45:43.695433698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:45:43.695514 env[1736]: time="2025-09-13T00:45:43.695473094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:45:43.695514 env[1736]: time="2025-09-13T00:45:43.695489324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:45:43.699673 env[1736]: time="2025-09-13T00:45:43.695913238Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d092490ed0c937338070ff61fa23f8a77f58a18a49dbb5f78903bc991e690cb pid=2304 runtime=io.containerd.runc.v2
Sep 13 00:45:43.739869 systemd[1]: Started cri-containerd-412954892665d07d39e6137a35a2f140998fad880a92f0c6b0158df5f277b8d9.scope.
Sep 13 00:45:43.750695 systemd[1]: Started cri-containerd-3d092490ed0c937338070ff61fa23f8a77f58a18a49dbb5f78903bc991e690cb.scope.
Sep 13 00:45:43.794142 env[1736]: time="2025-09-13T00:45:43.794089141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-206,Uid:acf327c39b6061201c72e92f01afaff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f75ed425c396b5394bc73375bc5d971c3baa31a4dee114a6ee59f7ff07898d4\""
Sep 13 00:45:43.810718 env[1736]: time="2025-09-13T00:45:43.810666518Z" level=info msg="CreateContainer within sandbox \"6f75ed425c396b5394bc73375bc5d971c3baa31a4dee114a6ee59f7ff07898d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:45:43.833919 env[1736]: time="2025-09-13T00:45:43.833871297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-206,Uid:e4f915ff71b2b421220936b59d1198ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"412954892665d07d39e6137a35a2f140998fad880a92f0c6b0158df5f277b8d9\""
Sep 13 00:45:43.847837 env[1736]: time="2025-09-13T00:45:43.847792432Z" level=info msg="CreateContainer within sandbox \"412954892665d07d39e6137a35a2f140998fad880a92f0c6b0158df5f277b8d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:45:43.872226 env[1736]: time="2025-09-13T00:45:43.872179777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-206,Uid:6982ed580ad99cd1d29c0a15e8b0c715,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d092490ed0c937338070ff61fa23f8a77f58a18a49dbb5f78903bc991e690cb\""
Sep 13 00:45:43.872458 env[1736]: time="2025-09-13T00:45:43.872426349Z" level=info msg="CreateContainer within sandbox \"6f75ed425c396b5394bc73375bc5d971c3baa31a4dee114a6ee59f7ff07898d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"650ea77d4747c8f8abe1f5032db93e99006bc52f0b820ffe98f21b9c1942e7bd\""
Sep 13 00:45:43.873283 env[1736]: time="2025-09-13T00:45:43.873253552Z" level=info msg="StartContainer for \"650ea77d4747c8f8abe1f5032db93e99006bc52f0b820ffe98f21b9c1942e7bd\""
Sep 13 00:45:43.880547 env[1736]: time="2025-09-13T00:45:43.880503547Z" level=info msg="CreateContainer within sandbox \"3d092490ed0c937338070ff61fa23f8a77f58a18a49dbb5f78903bc991e690cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:45:43.882514 env[1736]: time="2025-09-13T00:45:43.882473704Z" level=info msg="CreateContainer within sandbox \"412954892665d07d39e6137a35a2f140998fad880a92f0c6b0158df5f277b8d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"96c54ba48d26e0eeed6dc18a1ea6fd36bb50a7790d7e7e581c0ae54531850ae8\""
Sep 13 00:45:43.883398 env[1736]: time="2025-09-13T00:45:43.883367130Z" level=info msg="StartContainer for \"96c54ba48d26e0eeed6dc18a1ea6fd36bb50a7790d7e7e581c0ae54531850ae8\""
Sep 13 00:45:43.898522 systemd[1]: Started cri-containerd-650ea77d4747c8f8abe1f5032db93e99006bc52f0b820ffe98f21b9c1942e7bd.scope.
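The three RunPodSandbox/CreateContainer/StartContainer sequences above are the kubelet materializing its control-plane static pods. Given the "Adding static pod path" entry earlier, the implied layout is the following; the manifest contents themselves never appear in the log, so the file names are the conventional ones, not confirmed by it:

    /etc/kubernetes/manifests/
        kube-apiserver.yaml
        kube-controller-manager.yaml
        kube-scheduler.yaml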
Sep 13 00:45:43.917068 kubelet[2233]: E0913 00:45:43.917022 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:45:43.917534 env[1736]: time="2025-09-13T00:45:43.917483360Z" level=info msg="CreateContainer within sandbox \"3d092490ed0c937338070ff61fa23f8a77f58a18a49dbb5f78903bc991e690cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1200a51f6fab35a85ffb4eec0a58760856ca815b18346a143c937268be18d833\""
Sep 13 00:45:43.919929 env[1736]: time="2025-09-13T00:45:43.918331239Z" level=info msg="StartContainer for \"1200a51f6fab35a85ffb4eec0a58760856ca815b18346a143c937268be18d833\""
Sep 13 00:45:43.924508 systemd[1]: Started cri-containerd-96c54ba48d26e0eeed6dc18a1ea6fd36bb50a7790d7e7e581c0ae54531850ae8.scope.
Sep 13 00:45:43.974175 systemd[1]: Started cri-containerd-1200a51f6fab35a85ffb4eec0a58760856ca815b18346a143c937268be18d833.scope.
Sep 13 00:45:43.997079 env[1736]: time="2025-09-13T00:45:43.997022039Z" level=info msg="StartContainer for \"650ea77d4747c8f8abe1f5032db93e99006bc52f0b820ffe98f21b9c1942e7bd\" returns successfully"
Sep 13 00:45:44.032629 env[1736]: time="2025-09-13T00:45:44.031820097Z" level=info msg="StartContainer for \"96c54ba48d26e0eeed6dc18a1ea6fd36bb50a7790d7e7e581c0ae54531850ae8\" returns successfully"
Sep 13 00:45:44.049556 kubelet[2233]: E0913 00:45:44.049473 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-206?timeout=10s\": dial tcp 172.31.31.206:6443: connect: connection refused" interval="1.6s"
Sep 13 00:45:44.070059 env[1736]: time="2025-09-13T00:45:44.070004615Z" level=info msg="StartContainer for \"1200a51f6fab35a85ffb4eec0a58760856ca815b18346a143c937268be18d833\" returns successfully"
Sep 13 00:45:44.215501 kubelet[2233]: I0913 00:45:44.215060 2233 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-206"
Sep 13 00:45:44.215760 kubelet[2233]: E0913 00:45:44.215722 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.206:6443/api/v1/nodes\": dial tcp 172.31.31.206:6443: connect: connection refused" node="ip-172-31-31-206"
Sep 13 00:45:44.680401 kubelet[2233]: E0913 00:45:44.680355 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:44.685922 kubelet[2233]: E0913 00:45:44.685892 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:44.688731 kubelet[2233]: E0913 00:45:44.688698 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:44.705533 kubelet[2233]: E0913 00:45:44.705479 2233 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 00:45:45.501661 kubelet[2233]: E0913 00:45:45.501618 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-206&limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:45:45.650402 kubelet[2233]: E0913 00:45:45.650345 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-206?timeout=10s\": dial tcp 172.31.31.206:6443: connect: connection refused" interval="3.2s"
Sep 13 00:45:45.690212 kubelet[2233]: E0913 00:45:45.690184 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:45.690663 kubelet[2233]: E0913 00:45:45.690644 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:45.818119 kubelet[2233]: I0913 00:45:45.818087 2233 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-206"
Sep 13 00:45:45.818474 kubelet[2233]: E0913 00:45:45.818443 2233 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.206:6443/api/v1/nodes\": dial tcp 172.31.31.206:6443: connect: connection refused" node="ip-172-31-31-206"
Sep 13 00:45:46.014080 kubelet[2233]: E0913 00:45:46.014026 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:45:46.470586 kubelet[2233]: E0913 00:45:46.470548 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:45:46.670886 kubelet[2233]: E0913 00:45:46.670847 2233 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:45:49.020441 kubelet[2233]: I0913 00:45:49.020144 2233 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-206"
Sep 13 00:45:49.883729 kubelet[2233]: E0913 00:45:49.883699 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:50.573485 kubelet[2233]: E0913 00:45:50.573460 2233 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:50.726143 kubelet[2233]: E0913 00:45:50.726111 2233 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-206\" not found" node="ip-172-31-31-206"
Sep 13 00:45:50.788483 kubelet[2233]: I0913 00:45:50.788437 2233 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-206"
Sep 13 00:45:50.788483 kubelet[2233]: E0913 00:45:50.788481 2233 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-206\": node \"ip-172-31-31-206\" not found"
Sep 13 00:45:50.845836 kubelet[2233]: I0913 00:45:50.845673 2233 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-206"
Sep 13 00:45:50.856096 kubelet[2233]: E0913 00:45:50.856067 2233 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-206"
Sep 13 00:45:50.856303 kubelet[2233]: I0913 00:45:50.856293 2233 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:50.857937 kubelet[2233]: E0913 00:45:50.857902 2233 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:50.858100 kubelet[2233]: I0913 00:45:50.858090 2233 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:50.860178 kubelet[2233]: E0913 00:45:50.860136 2233 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-206\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:51.601485 kubelet[2233]: I0913 00:45:51.601445 2233 apiserver.go:52] "Watching apiserver"
Sep 13 00:45:51.645893 kubelet[2233]: I0913 00:45:51.645852 2233 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:45:52.824538 systemd[1]: Reloading.
Sep 13 00:45:52.910901 /usr/lib/systemd/system-generators/torcx-generator[2533]: time="2025-09-13T00:45:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:45:52.910931 /usr/lib/systemd/system-generators/torcx-generator[2533]: time="2025-09-13T00:45:52Z" level=info msg="torcx already run"
Sep 13 00:45:53.002506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:45:53.002527 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:45:53.020934 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:45:53.136187 systemd[1]: Stopping kubelet.service...
Sep 13 00:45:53.156140 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:45:53.156331 systemd[1]: Stopped kubelet.service.
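Registration finally succeeds above once the static kube-apiserver answers on 172.31.31.206:6443, yet mirror-pod creation is still rejected until the apiserver's bootstrap controllers create the built-in priority classes. A hypothetical post-hoc check from a machine with cluster credentials (kubectl appears nowhere in this log):

    kubectl get priorityclass system-node-critical   # created by the apiserver shortly after startup
    kubectl get node ip-172-31-31-206                # present once registration succeeds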
Sep 13 00:45:53.156423 systemd[1]: kubelet.service: Consumed 1.154s CPU time. Sep 13 00:45:53.158416 systemd[1]: Starting kubelet.service... Sep 13 00:45:54.673771 systemd[1]: Started kubelet.service. Sep 13 00:45:54.752466 kubelet[2592]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:45:54.752898 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:45:54.752993 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:45:54.753211 kubelet[2592]: I0913 00:45:54.753168 2592 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:45:54.770512 kubelet[2592]: I0913 00:45:54.770481 2592 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:45:54.770678 kubelet[2592]: I0913 00:45:54.770648 2592 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:45:54.770947 kubelet[2592]: I0913 00:45:54.770923 2592 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:45:54.772211 kubelet[2592]: I0913 00:45:54.772195 2592 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:45:54.783128 kubelet[2592]: I0913 00:45:54.783099 2592 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:45:54.787378 kubelet[2592]: E0913 00:45:54.787352 2592 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:45:54.787564 kubelet[2592]: I0913 00:45:54.787555 2592 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:45:54.791460 kubelet[2592]: I0913 00:45:54.791433 2592 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:45:54.791846 kubelet[2592]: I0913 00:45:54.791817 2592 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:45:54.792191 kubelet[2592]: I0913 00:45:54.791973 2592 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-206","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:45:54.792336 kubelet[2592]: I0913 00:45:54.792322 2592 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:45:54.792385 kubelet[2592]: I0913 00:45:54.792380 2592 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:45:54.792498 kubelet[2592]: I0913 00:45:54.792491 2592 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:45:54.794180 kubelet[2592]: I0913 00:45:54.792787 2592 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:45:54.794180 kubelet[2592]: I0913 00:45:54.792869 2592 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:45:54.794180 kubelet[2592]: I0913 00:45:54.792907 2592 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:45:54.794180 kubelet[2592]: I0913 00:45:54.792931 2592 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:45:54.808383 kubelet[2592]: I0913 00:45:54.808360 2592 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:45:54.808937 kubelet[2592]: I0913 00:45:54.808921 2592 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:45:54.818910 kubelet[2592]: I0913 00:45:54.818893 2592 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:45:54.819080 kubelet[2592]: I0913 00:45:54.819073 2592 server.go:1289] "Started kubelet" Sep 13 00:45:54.831261 kubelet[2592]: I0913 00:45:54.831239 2592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:45:54.831638 sudo[2607]: root : PWD=/home/core 
; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:45:54.831883 sudo[2607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:45:54.832725 kubelet[2592]: I0913 00:45:54.832697 2592 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:45:54.833579 kubelet[2592]: I0913 00:45:54.833564 2592 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:45:54.834907 kubelet[2592]: I0913 00:45:54.834868 2592 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:45:54.835489 kubelet[2592]: I0913 00:45:54.835476 2592 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:45:54.837015 kubelet[2592]: I0913 00:45:54.837002 2592 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:45:54.841195 kubelet[2592]: E0913 00:45:54.841179 2592 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:45:54.842744 kubelet[2592]: I0913 00:45:54.842731 2592 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:45:54.843270 kubelet[2592]: I0913 00:45:54.843257 2592 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:45:54.843475 kubelet[2592]: I0913 00:45:54.843466 2592 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:45:54.845672 kubelet[2592]: I0913 00:45:54.845642 2592 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:45:54.846896 kubelet[2592]: I0913 00:45:54.846884 2592 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:45:54.846984 kubelet[2592]: I0913 00:45:54.846977 2592 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:45:54.860579 kubelet[2592]: I0913 00:45:54.860551 2592 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:45:54.863685 kubelet[2592]: I0913 00:45:54.863668 2592 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:45:54.864138 kubelet[2592]: I0913 00:45:54.864125 2592 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:45:54.864222 kubelet[2592]: I0913 00:45:54.864215 2592 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
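[Annotation] The HardEvictionThresholds in the nodeConfig dump above each carry either an absolute Quantity (memory.available < 100Mi) or a Percentage of capacity (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal standalone Go sketch of that semantics follows; it is not kubelet source, just an illustration of how a LessThan threshold is evaluated against either form, with hypothetical type and field names.

package main

import "fmt"

// Threshold mirrors the shape seen in the nodeConfig dump: either an
// absolute quantity in bytes or a fraction of total capacity.
type Threshold struct {
	Signal     string
	QuantityB  int64   // absolute bytes; zero when Percentage is used
	Percentage float64 // fraction of capacity; zero when QuantityB is used
}

// breached reports whether available falls below the threshold, which
// is what Operator "LessThan" means in the dump.
func (t Threshold) breached(available, capacity int64) bool {
	limit := t.QuantityB
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	mem := Threshold{Signal: "memory.available", QuantityB: 100 << 20} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}   // 10%

	fmt.Println(mem.breached(90<<20, 2<<30))    // true: 90Mi < 100Mi
	fmt.Println(nodefs.breached(5<<30, 40<<30)) // false: 5Gi >= 10% of 40Gi
}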
Sep 13 00:45:54.864265 kubelet[2592]: I0913 00:45:54.864260 2592 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:45:54.864348 kubelet[2592]: E0913 00:45:54.864336 2592 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:45:54.904414 kubelet[2592]: I0913 00:45:54.904394 2592 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:45:54.904571 kubelet[2592]: I0913 00:45:54.904561 2592 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:45:54.904651 kubelet[2592]: I0913 00:45:54.904645 2592 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:45:54.904831 kubelet[2592]: I0913 00:45:54.904821 2592 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:45:54.904905 kubelet[2592]: I0913 00:45:54.904887 2592 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:45:54.904949 kubelet[2592]: I0913 00:45:54.904944 2592 policy_none.go:49] "None policy: Start"
Sep 13 00:45:54.905015 kubelet[2592]: I0913 00:45:54.905009 2592 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:45:54.905058 kubelet[2592]: I0913 00:45:54.905053 2592 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:45:54.905299 kubelet[2592]: I0913 00:45:54.905285 2592 state_mem.go:75] "Updated machine memory state"
Sep 13 00:45:54.918669 kubelet[2592]: E0913 00:45:54.911931 2592 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:45:54.918669 kubelet[2592]: I0913 00:45:54.912146 2592 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:45:54.918669 kubelet[2592]: I0913 00:45:54.912170 2592 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:45:54.920462 kubelet[2592]: I0913 00:45:54.920444 2592 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:45:54.922701 kubelet[2592]: E0913 00:45:54.922683 2592 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:45:54.966643 kubelet[2592]: I0913 00:45:54.965704 2592 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:54.966643 kubelet[2592]: I0913 00:45:54.966085 2592 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-206"
Sep 13 00:45:54.966643 kubelet[2592]: I0913 00:45:54.966298 2592 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:55.023670 kubelet[2592]: I0913 00:45:55.023640 2592 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-206"
Sep 13 00:45:55.034520 kubelet[2592]: I0913 00:45:55.034489 2592 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-206"
Sep 13 00:45:55.034693 kubelet[2592]: I0913 00:45:55.034566 2592 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-206"
Sep 13 00:45:55.045192 kubelet[2592]: I0913 00:45:55.045154 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:55.045773 kubelet[2592]: I0913 00:45:55.045747 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:55.046188 kubelet[2592]: I0913 00:45:55.046161 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:55.046293 kubelet[2592]: I0913 00:45:55.046195 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acf327c39b6061201c72e92f01afaff4-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-206\" (UID: \"acf327c39b6061201c72e92f01afaff4\") " pod="kube-system/kube-scheduler-ip-172-31-31-206"
Sep 13 00:45:55.046293 kubelet[2592]: I0913 00:45:55.046218 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6982ed580ad99cd1d29c0a15e8b0c715-ca-certs\") pod \"kube-apiserver-ip-172-31-31-206\" (UID: \"6982ed580ad99cd1d29c0a15e8b0c715\") " pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:55.046293 kubelet[2592]: I0913 00:45:55.046251 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6982ed580ad99cd1d29c0a15e8b0c715-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-206\" (UID: \"6982ed580ad99cd1d29c0a15e8b0c715\") " pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:55.046293 kubelet[2592]: I0913 00:45:55.046274 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6982ed580ad99cd1d29c0a15e8b0c715-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-206\" (UID: \"6982ed580ad99cd1d29c0a15e8b0c715\") " pod="kube-system/kube-apiserver-ip-172-31-31-206"
Sep 13 00:45:55.046472 kubelet[2592]: I0913 00:45:55.046302 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:55.046472 kubelet[2592]: I0913 00:45:55.046325 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4f915ff71b2b421220936b59d1198ed-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-206\" (UID: \"e4f915ff71b2b421220936b59d1198ed\") " pod="kube-system/kube-controller-manager-ip-172-31-31-206"
Sep 13 00:45:55.808194 kubelet[2592]: I0913 00:45:55.808156 2592 apiserver.go:52] "Watching apiserver"
Sep 13 00:45:55.844474 kubelet[2592]: I0913 00:45:55.844439 2592 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:45:55.907970 kubelet[2592]: I0913 00:45:55.907904 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-206" podStartSLOduration=1.907890943 podStartE2EDuration="1.907890943s" podCreationTimestamp="2025-09-13 00:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:45:55.907218444 +0000 UTC m=+1.212340943" watchObservedRunningTime="2025-09-13 00:45:55.907890943 +0000 UTC m=+1.213013429"
Sep 13 00:45:55.918015 kubelet[2592]: I0913 00:45:55.917958 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-206" podStartSLOduration=1.917942521 podStartE2EDuration="1.917942521s" podCreationTimestamp="2025-09-13 00:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:45:55.917717174 +0000 UTC m=+1.222839678" watchObservedRunningTime="2025-09-13 00:45:55.917942521 +0000 UTC m=+1.223065006"
Sep 13 00:45:55.941501 kubelet[2592]: I0913 00:45:55.941451 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-206" podStartSLOduration=1.941429528 podStartE2EDuration="1.941429528s" podCreationTimestamp="2025-09-13 00:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:45:55.928803799 +0000 UTC m=+1.233926307" watchObservedRunningTime="2025-09-13 00:45:55.941429528 +0000 UTC m=+1.246552016"
Sep 13 00:45:56.019528 update_engine[1732]: I0913 00:45:56.019454 1732 update_attempter.cc:509] Updating boot flags...
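[Annotation] For the static pods above nothing is ever pulled, so firstStartedPulling and lastFinishedPulling stay at the zero time and podStartSLOduration reduces to the gap between podCreationTimestamp and the observed running time. A small illustrative Go check (not tracker source) reproduces the 1.907890943s figure from the two timestamps reported in the first entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry
	// for kube-controller-manager-ip-172-31-31-206 above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-13 00:45:54 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-09-13 00:45:55.907890943 +0000 UTC")

	// Prints 1.907890943s, matching podStartSLOduration in the log.
	fmt.Println(observed.Sub(created))
}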
Sep 13 00:45:56.947079 sudo[2607]: pam_unix(sudo:session): session closed for user root
Sep 13 00:45:57.113423 kubelet[2592]: I0913 00:45:57.113393 2592 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:45:57.114096 env[1736]: time="2025-09-13T00:45:57.114063251Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:45:57.114548 kubelet[2592]: I0913 00:45:57.114527 2592 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:45:58.246731 systemd[1]: Created slice kubepods-besteffort-podbe22b1a9_f42e_41eb_b430_1ee15002bd37.slice.
Sep 13 00:45:58.252268 systemd[1]: Created slice kubepods-burstable-pod125193e0_d43f_44b5_8acd_968f743b6e72.slice.
Sep 13 00:45:58.280254 kubelet[2592]: I0913 00:45:58.280227 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be22b1a9-f42e-41eb-b430-1ee15002bd37-kube-proxy\") pod \"kube-proxy-6vnhs\" (UID: \"be22b1a9-f42e-41eb-b430-1ee15002bd37\") " pod="kube-system/kube-proxy-6vnhs"
Sep 13 00:45:58.280691 kubelet[2592]: I0913 00:45:58.280674 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be22b1a9-f42e-41eb-b430-1ee15002bd37-xtables-lock\") pod \"kube-proxy-6vnhs\" (UID: \"be22b1a9-f42e-41eb-b430-1ee15002bd37\") " pod="kube-system/kube-proxy-6vnhs"
Sep 13 00:45:58.280787 kubelet[2592]: I0913 00:45:58.280776 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-etc-cni-netd\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.280874 kubelet[2592]: I0913 00:45:58.280864 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-config-path\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.280962 kubelet[2592]: I0913 00:45:58.280942 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-net\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281044 kubelet[2592]: I0913 00:45:58.281035 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-kernel\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281254 kubelet[2592]: I0913 00:45:58.281240 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-hostproc\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281345 kubelet[2592]: I0913 00:45:58.281335 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cni-path\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281412 kubelet[2592]: I0913 00:45:58.281403 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-lib-modules\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281482 kubelet[2592]: I0913 00:45:58.281469 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-xtables-lock\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281559 kubelet[2592]: I0913 00:45:58.281550 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/125193e0-d43f-44b5-8acd-968f743b6e72-clustermesh-secrets\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281627 kubelet[2592]: I0913 00:45:58.281619 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-hubble-tls\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281697 kubelet[2592]: I0913 00:45:58.281689 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be22b1a9-f42e-41eb-b430-1ee15002bd37-lib-modules\") pod \"kube-proxy-6vnhs\" (UID: \"be22b1a9-f42e-41eb-b430-1ee15002bd37\") " pod="kube-system/kube-proxy-6vnhs"
Sep 13 00:45:58.281759 kubelet[2592]: I0913 00:45:58.281751 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-run\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281827 kubelet[2592]: I0913 00:45:58.281812 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8phg\" (UniqueName: \"kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-kube-api-access-s8phg\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.281907 kubelet[2592]: I0913 00:45:58.281897 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89nr\" (UniqueName: \"kubernetes.io/projected/be22b1a9-f42e-41eb-b430-1ee15002bd37-kube-api-access-h89nr\") pod \"kube-proxy-6vnhs\" (UID: \"be22b1a9-f42e-41eb-b430-1ee15002bd37\") " pod="kube-system/kube-proxy-6vnhs"
Sep 13 00:45:58.281979 kubelet[2592]: I0913 00:45:58.281970 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-bpf-maps\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.282049 kubelet[2592]: I0913 00:45:58.282040 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-cgroup\") pod \"cilium-tqd9t\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") " pod="kube-system/cilium-tqd9t"
Sep 13 00:45:58.323025 systemd[1]: Created slice kubepods-besteffort-pod70513f69_0ebc_4eb7_9df0_10b1ff3a073a.slice.
Sep 13 00:45:58.382546 kubelet[2592]: I0913 00:45:58.382494 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lx2\" (UniqueName: \"kubernetes.io/projected/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-kube-api-access-z9lx2\") pod \"cilium-operator-6c4d7847fc-7v6qq\" (UID: \"70513f69-0ebc-4eb7-9df0-10b1ff3a073a\") " pod="kube-system/cilium-operator-6c4d7847fc-7v6qq"
Sep 13 00:45:58.382731 kubelet[2592]: I0913 00:45:58.382592 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7v6qq\" (UID: \"70513f69-0ebc-4eb7-9df0-10b1ff3a073a\") " pod="kube-system/cilium-operator-6c4d7847fc-7v6qq"
Sep 13 00:45:58.383372 kubelet[2592]: I0913 00:45:58.383345 2592 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 00:45:58.556279 env[1736]: time="2025-09-13T00:45:58.556224777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqd9t,Uid:125193e0-d43f-44b5-8acd-968f743b6e72,Namespace:kube-system,Attempt:0,}"
Sep 13 00:45:58.561004 env[1736]: time="2025-09-13T00:45:58.560955132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vnhs,Uid:be22b1a9-f42e-41eb-b430-1ee15002bd37,Namespace:kube-system,Attempt:0,}"
Sep 13 00:45:58.592691 env[1736]: time="2025-09-13T00:45:58.592639847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:45:58.592885 env[1736]: time="2025-09-13T00:45:58.592852338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:45:58.592974 env[1736]: time="2025-09-13T00:45:58.592957104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:45:58.593206 env[1736]: time="2025-09-13T00:45:58.593151314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad pid=2838 runtime=io.containerd.runc.v2
Sep 13 00:45:58.601236 env[1736]: time="2025-09-13T00:45:58.600998383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:45:58.601236 env[1736]: time="2025-09-13T00:45:58.601050206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:45:58.601236 env[1736]: time="2025-09-13T00:45:58.601060567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:45:58.601650 env[1736]: time="2025-09-13T00:45:58.601339879Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fb11152c234d4e6561aad16be2882841e8c67273c428445a1808bfbea1a3554 pid=2855 runtime=io.containerd.runc.v2
Sep 13 00:45:58.611326 systemd[1]: Started cri-containerd-a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad.scope.
Sep 13 00:45:58.630795 env[1736]: time="2025-09-13T00:45:58.629484890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7v6qq,Uid:70513f69-0ebc-4eb7-9df0-10b1ff3a073a,Namespace:kube-system,Attempt:0,}"
Sep 13 00:45:58.642198 systemd[1]: Started cri-containerd-2fb11152c234d4e6561aad16be2882841e8c67273c428445a1808bfbea1a3554.scope.
Sep 13 00:45:58.682148 env[1736]: time="2025-09-13T00:45:58.682096472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tqd9t,Uid:125193e0-d43f-44b5-8acd-968f743b6e72,Namespace:kube-system,Attempt:0,} returns sandbox id \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\""
Sep 13 00:45:58.685461 env[1736]: time="2025-09-13T00:45:58.685421275Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:45:58.695441 env[1736]: time="2025-09-13T00:45:58.695395091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vnhs,Uid:be22b1a9-f42e-41eb-b430-1ee15002bd37,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fb11152c234d4e6561aad16be2882841e8c67273c428445a1808bfbea1a3554\""
Sep 13 00:45:58.700029 env[1736]: time="2025-09-13T00:45:58.697722320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:45:58.700029 env[1736]: time="2025-09-13T00:45:58.697774257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:45:58.700029 env[1736]: time="2025-09-13T00:45:58.697801095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:45:58.700029 env[1736]: time="2025-09-13T00:45:58.697963593Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c pid=2916 runtime=io.containerd.runc.v2
Sep 13 00:45:58.705800 env[1736]: time="2025-09-13T00:45:58.705747838Z" level=info msg="CreateContainer within sandbox \"2fb11152c234d4e6561aad16be2882841e8c67273c428445a1808bfbea1a3554\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:45:58.716169 systemd[1]: Started cri-containerd-0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c.scope.
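[Annotation] Each "starting signal loop" entry above pins a runc v2 shim to a task directory derived from the CRI namespace and the sandbox ID. A tiny illustrative Go helper (hypothetical function name, path convention taken from the log itself) rebuilds the path reported for the cilium-tqd9t sandbox:

package main

import (
	"fmt"
	"path/filepath"
)

// taskDir rebuilds the runtime-v2 task path reported by the
// "starting signal loop" entries for the k8s.io namespace.
func taskDir(namespace, sandboxID string) string {
	return filepath.Join("/run/containerd/io.containerd.runtime.v2.task", namespace, sandboxID)
}

func main() {
	id := "a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad"
	// Prints the same path= value logged for pid 2838 above.
	fmt.Println(taskDir("k8s.io", id))
}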
Sep 13 00:45:58.773646 env[1736]: time="2025-09-13T00:45:58.773554926Z" level=info msg="CreateContainer within sandbox \"2fb11152c234d4e6561aad16be2882841e8c67273c428445a1808bfbea1a3554\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"90ca39c0108900e37c309e7e9f5f7d5f3cc970293f7d2e38119a70f33dac3c34\""
Sep 13 00:45:58.775997 env[1736]: time="2025-09-13T00:45:58.774635630Z" level=info msg="StartContainer for \"90ca39c0108900e37c309e7e9f5f7d5f3cc970293f7d2e38119a70f33dac3c34\""
Sep 13 00:45:58.785779 env[1736]: time="2025-09-13T00:45:58.785732815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7v6qq,Uid:70513f69-0ebc-4eb7-9df0-10b1ff3a073a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\""
Sep 13 00:45:58.800261 systemd[1]: Started cri-containerd-90ca39c0108900e37c309e7e9f5f7d5f3cc970293f7d2e38119a70f33dac3c34.scope.
Sep 13 00:45:58.847362 env[1736]: time="2025-09-13T00:45:58.846136153Z" level=info msg="StartContainer for \"90ca39c0108900e37c309e7e9f5f7d5f3cc970293f7d2e38119a70f33dac3c34\" returns successfully"
Sep 13 00:45:58.905387 kubelet[2592]: I0913 00:45:58.904778 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6vnhs" podStartSLOduration=0.904760855 podStartE2EDuration="904.760855ms" podCreationTimestamp="2025-09-13 00:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:45:58.904402872 +0000 UTC m=+4.209525377" watchObservedRunningTime="2025-09-13 00:45:58.904760855 +0000 UTC m=+4.209883359"
Sep 13 00:46:04.231376 amazon-ssm-agent[1796]: 2025-09-13 00:46:04 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Sep 13 00:46:04.758685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375572725.mount: Deactivated successfully.
Sep 13 00:46:07.968059 env[1736]: time="2025-09-13T00:46:07.968006453Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:46:07.970163 env[1736]: time="2025-09-13T00:46:07.970125355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:46:07.971981 env[1736]: time="2025-09-13T00:46:07.971939283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:46:07.972656 env[1736]: time="2025-09-13T00:46:07.972590600Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 00:46:07.976191 env[1736]: time="2025-09-13T00:46:07.976148622Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:46:07.978363 env[1736]: time="2025-09-13T00:46:07.978329970Z" level=info msg="CreateContainer within sandbox \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:46:08.025565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266706427.mount: Deactivated successfully.
Sep 13 00:46:08.039772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560018911.mount: Deactivated successfully.
Sep 13 00:46:08.049658 env[1736]: time="2025-09-13T00:46:08.049589746Z" level=info msg="CreateContainer within sandbox \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\""
Sep 13 00:46:08.051517 env[1736]: time="2025-09-13T00:46:08.050362097Z" level=info msg="StartContainer for \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\""
Sep 13 00:46:08.073390 systemd[1]: Started cri-containerd-521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989.scope.
Sep 13 00:46:08.090781 systemd[1]: cri-containerd-521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989.scope: Deactivated successfully.
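[Annotation] The pull above resolves a tag-plus-digest reference; the sha256 digest after "@" is what actually pins the image, while the v1.12.5 tag is informational. A short illustrative Go snippet splits the reference exactly as logged:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Reference copied verbatim from the PullImage entry above.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	name, digest, _ := strings.Cut(ref, "@")
	repo, tag, _ := strings.Cut(name, ":")
	fmt.Println("repo:", repo)     // quay.io/cilium/cilium
	fmt.Println("tag:", tag)       // v1.12.5
	fmt.Println("digest:", digest) // sha256:06ce2b0a...
}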
Sep 13 00:46:08.108243 env[1736]: time="2025-09-13T00:46:08.108190131Z" level=info msg="shim disconnected" id=521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989
Sep 13 00:46:08.108243 env[1736]: time="2025-09-13T00:46:08.108240411Z" level=warning msg="cleaning up after shim disconnected" id=521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989 namespace=k8s.io
Sep 13 00:46:08.108243 env[1736]: time="2025-09-13T00:46:08.108249585Z" level=info msg="cleaning up dead shim"
Sep 13 00:46:08.118977 env[1736]: time="2025-09-13T00:46:08.118921006Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:46:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3156 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:46:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Sep 13 00:46:08.119335 env[1736]: time="2025-09-13T00:46:08.119218946Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed"
Sep 13 00:46:08.119724 env[1736]: time="2025-09-13T00:46:08.119674425Z" level=error msg="Failed to pipe stderr of container \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\"" error="reading from a closed fifo"
Sep 13 00:46:08.119805 env[1736]: time="2025-09-13T00:46:08.119673094Z" level=error msg="Failed to pipe stdout of container \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\"" error="reading from a closed fifo"
Sep 13 00:46:08.123242 env[1736]: time="2025-09-13T00:46:08.123159393Z" level=error msg="StartContainer for \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Sep 13 00:46:08.123455 kubelet[2592]: E0913 00:46:08.123408 2592 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989"
Sep 13 00:46:08.126420 kubelet[2592]: E0913 00:46:08.126377 2592 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Sep 13 00:46:08.126420 kubelet[2592]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Sep 13 00:46:08.126420 kubelet[2592]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Sep 13 00:46:08.126420 kubelet[2592]: rm /hostbin/cilium-mount
Sep 13 00:46:08.126667 kubelet[2592]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8phg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-tqd9t_kube-system(125193e0-d43f-44b5-8acd-968f743b6e72): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Sep 13 00:46:08.126667 kubelet[2592]: > logger="UnhandledError"
Sep 13 00:46:08.127850 kubelet[2592]: E0913 00:46:08.127758 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-tqd9t" podUID="125193e0-d43f-44b5-8acd-968f743b6e72"
Sep 13 00:46:08.951684 env[1736]: time="2025-09-13T00:46:08.951643046Z" level=info msg="StopPodSandbox for \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\""
Sep 13 00:46:08.951886 env[1736]: time="2025-09-13T00:46:08.951700622Z" level=info msg="Container to stop \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:46:08.961305 systemd[1]: cri-containerd-a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad.scope: Deactivated successfully.
Sep 13 00:46:08.989291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989-rootfs.mount: Deactivated successfully.
Sep 13 00:46:08.989400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad-rootfs.mount: Deactivated successfully.
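[Annotation] The root cause surfaced through all three layers above is a single failed syscall: before starting the container process, runc writes the container's SELinux context to /proc/self/attr/keycreate to label the session keyring, and on this host that write returns EINVAL (the spec dump shows Type:spc_t, Level:s0). A minimal Go sketch reproduces the same write in isolation; it assumes a Linux host, and the full context string is an assumption built from the Type and Level fields in the spec, not copied from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Write the SELinux context to the keyring-labeling attribute the
	// same way runc does during container init; on this host the
	// kernel rejects it with "invalid argument" (EINVAL). The user
	// and role parts of the label are assumptions for illustration.
	label := []byte("system_u:system_r:spc_t:s0")
	if err := os.WriteFile("/proc/self/attr/keycreate", label, 0); err != nil {
		fmt.Println("keycreate write failed:", err)
	}
}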
Sep 13 00:46:08.989455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad-shm.mount: Deactivated successfully.
Sep 13 00:46:08.997186 env[1736]: time="2025-09-13T00:46:08.997130007Z" level=info msg="shim disconnected" id=a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad
Sep 13 00:46:08.997759 env[1736]: time="2025-09-13T00:46:08.997725992Z" level=warning msg="cleaning up after shim disconnected" id=a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad namespace=k8s.io
Sep 13 00:46:08.997759 env[1736]: time="2025-09-13T00:46:08.997752972Z" level=info msg="cleaning up dead shim"
Sep 13 00:46:09.006630 env[1736]: time="2025-09-13T00:46:09.006563390Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:46:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3189 runtime=io.containerd.runc.v2\n"
Sep 13 00:46:09.007005 env[1736]: time="2025-09-13T00:46:09.006971011Z" level=info msg="TearDown network for sandbox \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" successfully"
Sep 13 00:46:09.007105 env[1736]: time="2025-09-13T00:46:09.007002942Z" level=info msg="StopPodSandbox for \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" returns successfully"
Sep 13 00:46:09.091389 kubelet[2592]: I0913 00:46:09.091353 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-config-path\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091389 kubelet[2592]: I0913 00:46:09.091386 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-hostproc\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091407 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-lib-modules\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091422 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-bpf-maps\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091440 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-kernel\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091453 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-xtables-lock\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091472 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/125193e0-d43f-44b5-8acd-968f743b6e72-clustermesh-secrets\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091486 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-etc-cni-netd\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091501 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-net\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091517 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-hubble-tls\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091532 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-run\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091546 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-cgroup\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091561 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cni-path\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.091638 kubelet[2592]: I0913 00:46:09.091576 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8phg\" (UniqueName: \"kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-kube-api-access-s8phg\") pod \"125193e0-d43f-44b5-8acd-968f743b6e72\" (UID: \"125193e0-d43f-44b5-8acd-968f743b6e72\") "
Sep 13 00:46:09.093135 kubelet[2592]: I0913 00:46:09.092690 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.093135 kubelet[2592]: I0913 00:46:09.092764 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.093672 kubelet[2592]: I0913 00:46:09.093650 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.093802 kubelet[2592]: I0913 00:46:09.093791 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.093880 kubelet[2592]: I0913 00:46:09.093871 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cni-path" (OuterVolumeSpecName: "cni-path") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.093954 kubelet[2592]: I0913 00:46:09.093945 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.096536 systemd[1]: var-lib-kubelet-pods-125193e0\x2dd43f\x2d44b5\x2d8acd\x2d968f743b6e72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds8phg.mount: Deactivated successfully.
Sep 13 00:46:09.103551 kubelet[2592]: I0913 00:46:09.098659 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125193e0-d43f-44b5-8acd-968f743b6e72-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:46:09.103551 kubelet[2592]: I0913 00:46:09.098713 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.103551 kubelet[2592]: I0913 00:46:09.098731 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.103551 kubelet[2592]: I0913 00:46:09.098746 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.103551 kubelet[2592]: I0913 00:46:09.098762 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-hostproc" (OuterVolumeSpecName: "hostproc") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:46:09.103551 kubelet[2592]: I0913 00:46:09.099719 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-kube-api-access-s8phg" (OuterVolumeSpecName: "kube-api-access-s8phg") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "kube-api-access-s8phg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:46:09.103551 kubelet[2592]: I0913 00:46:09.101279 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:46:09.102128 systemd[1]: var-lib-kubelet-pods-125193e0\x2dd43f\x2d44b5\x2d8acd\x2d968f743b6e72-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:46:09.105852 kubelet[2592]: I0913 00:46:09.105826 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "125193e0-d43f-44b5-8acd-968f743b6e72" (UID: "125193e0-d43f-44b5-8acd-968f743b6e72"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:46:09.106076 systemd[1]: var-lib-kubelet-pods-125193e0\x2dd43f\x2d44b5\x2d8acd\x2d968f743b6e72-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:46:09.192844 kubelet[2592]: I0913 00:46:09.192809 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-config-path\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.192844 kubelet[2592]: I0913 00:46:09.192846 2592 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-hostproc\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192859 2592 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-lib-modules\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192873 2592 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-bpf-maps\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192884 2592 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-kernel\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192894 2592 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-xtables-lock\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192906 2592 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/125193e0-d43f-44b5-8acd-968f743b6e72-clustermesh-secrets\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192916 2592 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-etc-cni-netd\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192927 2592 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-host-proc-sys-net\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192939 2592 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-hubble-tls\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192950 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-run\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192964 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cilium-cgroup\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192976 2592 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/125193e0-d43f-44b5-8acd-968f743b6e72-cni-path\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.193308 kubelet[2592]: I0913 00:46:09.192988 2592 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8phg\" (UniqueName: \"kubernetes.io/projected/125193e0-d43f-44b5-8acd-968f743b6e72-kube-api-access-s8phg\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:46:09.955348 kubelet[2592]: I0913 00:46:09.955314 2592 scope.go:117] "RemoveContainer" containerID="521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989"
Sep 13 00:46:09.964750 systemd[1]: Removed slice kubepods-burstable-pod125193e0_d43f_44b5_8acd_968f743b6e72.slice.
Sep 13 00:46:09.985529 env[1736]: time="2025-09-13T00:46:09.985483397Z" level=info msg="RemoveContainer for \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\""
Sep 13 00:46:09.989968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544166060.mount: Deactivated successfully.
Sep 13 00:46:09.995864 env[1736]: time="2025-09-13T00:46:09.995813057Z" level=info msg="RemoveContainer for \"521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989\" returns successfully"
Sep 13 00:46:10.056593 env[1736]: time="2025-09-13T00:46:10.056543439Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:46:10.061483 env[1736]: time="2025-09-13T00:46:10.061413868Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:46:10.065240 env[1736]: time="2025-09-13T00:46:10.065190846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:46:10.066169 env[1736]: time="2025-09-13T00:46:10.066120917Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:46:10.073516 env[1736]: time="2025-09-13T00:46:10.073469935Z" level=info msg="CreateContainer within sandbox \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:46:10.108828 env[1736]: time="2025-09-13T00:46:10.108768956Z" level=info msg="CreateContainer within sandbox \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\""
Sep 13 00:46:10.109624 env[1736]: time="2025-09-13T00:46:10.109575853Z" level=info msg="StartContainer for \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\""
Sep 13 00:46:10.142429 systemd[1]: Started cri-containerd-47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac.scope.
Sep 13 00:46:10.217222 systemd[1]: Created slice kubepods-burstable-podef130339_b9b2_4c11_bf34_8fe5bc1ff2c5.slice.
Sep 13 00:46:10.245218 env[1736]: time="2025-09-13T00:46:10.245170181Z" level=info msg="StartContainer for \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\" returns successfully" Sep 13 00:46:10.299505 kubelet[2592]: I0913 00:46:10.299447 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-bpf-maps\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.299505 kubelet[2592]: I0913 00:46:10.299503 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hostproc\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299524 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cni-path\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299543 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-etc-cni-netd\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299564 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-lib-modules\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299585 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-net\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299622 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-run\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299674 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-cgroup\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299699 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-xtables-lock\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 
kubelet[2592]: I0913 00:46:10.299722 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-config-path\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299746 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hubble-tls\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299772 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7gk\" (UniqueName: \"kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-kube-api-access-jl7gk\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299805 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-clustermesh-secrets\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.300066 kubelet[2592]: I0913 00:46:10.299836 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-kernel\") pod \"cilium-656dt\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " pod="kube-system/cilium-656dt" Sep 13 00:46:10.524558 env[1736]: time="2025-09-13T00:46:10.524057704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-656dt,Uid:ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5,Namespace:kube-system,Attempt:0,}" Sep 13 00:46:10.557260 env[1736]: time="2025-09-13T00:46:10.557160726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:46:10.557430 env[1736]: time="2025-09-13T00:46:10.557272081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:46:10.557430 env[1736]: time="2025-09-13T00:46:10.557303156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:46:10.557551 env[1736]: time="2025-09-13T00:46:10.557515023Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33 pid=3251 runtime=io.containerd.runc.v2 Sep 13 00:46:10.604096 systemd[1]: Started cri-containerd-3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33.scope. 
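The reconciler_common entries enumerate every volume the cilium-656dt pod mounts: hostPath volumes (bpf-maps, hostproc, cni-path, etc-cni-netd, lib-modules, and so on), a ConfigMap (cilium-config-path), a Secret (clustermesh-secrets), and projected volumes (hubble-tls, the kube-api-access token). A minimal client-go sketch that prints the same volume sources, assuming it runs in-cluster with permission to read pods in kube-system:

```go
// Sketch: print the volume sources the kubelet was reconciling for
// cilium-656dt. Assumes in-cluster config and RBAC to read pods;
// pod name and namespace come from the log entries above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.TODO(), "cilium-656dt", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range pod.Spec.Volumes {
		switch {
		case v.HostPath != nil:
			fmt.Printf("%-22s hostPath  %s\n", v.Name, v.HostPath.Path)
		case v.ConfigMap != nil:
			fmt.Printf("%-22s configMap %s\n", v.Name, v.ConfigMap.Name)
		case v.Secret != nil:
			fmt.Printf("%-22s secret    %s\n", v.Name, v.Secret.SecretName)
		case v.Projected != nil:
			fmt.Printf("%-22s projected (%d sources)\n", v.Name, len(v.Projected.Sources))
		}
	}
}
```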
Sep 13 00:46:10.676314 env[1736]: time="2025-09-13T00:46:10.676268918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-656dt,Uid:ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\"" Sep 13 00:46:10.683690 env[1736]: time="2025-09-13T00:46:10.683646030Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:46:10.710177 env[1736]: time="2025-09-13T00:46:10.710124220Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\"" Sep 13 00:46:10.711399 env[1736]: time="2025-09-13T00:46:10.711367651Z" level=info msg="StartContainer for \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\"" Sep 13 00:46:10.734299 systemd[1]: Started cri-containerd-3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f.scope. Sep 13 00:46:10.801740 env[1736]: time="2025-09-13T00:46:10.801684116Z" level=info msg="StartContainer for \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\" returns successfully" Sep 13 00:46:10.867828 kubelet[2592]: I0913 00:46:10.867793 2592 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125193e0-d43f-44b5-8acd-968f743b6e72" path="/var/lib/kubelet/pods/125193e0-d43f-44b5-8acd-968f743b6e72/volumes" Sep 13 00:46:10.890112 systemd[1]: cri-containerd-3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f.scope: Deactivated successfully. 
Sep 13 00:46:10.945296 env[1736]: time="2025-09-13T00:46:10.945236564Z" level=info msg="shim disconnected" id=3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f Sep 13 00:46:10.945296 env[1736]: time="2025-09-13T00:46:10.945298106Z" level=warning msg="cleaning up after shim disconnected" id=3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f namespace=k8s.io Sep 13 00:46:10.945747 env[1736]: time="2025-09-13T00:46:10.945309370Z" level=info msg="cleaning up dead shim" Sep 13 00:46:10.999038 env[1736]: time="2025-09-13T00:46:10.997706402Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:46:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3334 runtime=io.containerd.runc.v2\n" Sep 13 00:46:11.212518 kubelet[2592]: W0913 00:46:11.212393 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod125193e0_d43f_44b5_8acd_968f743b6e72.slice/cri-containerd-521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989.scope WatchSource:0}: container "521d933c283c0d1c8001e76cf56bad1bd8cd58750be63d6ff070668a47359989" in namespace "k8s.io": not found Sep 13 00:46:11.991368 env[1736]: time="2025-09-13T00:46:11.991316080Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:46:12.016934 env[1736]: time="2025-09-13T00:46:12.016878350Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\"" Sep 13 00:46:12.017714 env[1736]: time="2025-09-13T00:46:12.017679447Z" level=info msg="StartContainer for \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\"" Sep 13 00:46:12.017805 kubelet[2592]: I0913 00:46:12.017686 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7v6qq" podStartSLOduration=2.737346408 podStartE2EDuration="14.017670895s" podCreationTimestamp="2025-09-13 00:45:58 +0000 UTC" firstStartedPulling="2025-09-13 00:45:58.787701106 +0000 UTC m=+4.092823598" lastFinishedPulling="2025-09-13 00:46:10.068025593 +0000 UTC m=+15.373148085" observedRunningTime="2025-09-13 00:46:11.211934894 +0000 UTC m=+16.517057400" watchObservedRunningTime="2025-09-13 00:46:12.017670895 +0000 UTC m=+17.322793394" Sep 13 00:46:12.046853 systemd[1]: Started cri-containerd-89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5.scope. Sep 13 00:46:12.086563 env[1736]: time="2025-09-13T00:46:12.086384860Z" level=info msg="StartContainer for \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\" returns successfully" Sep 13 00:46:12.166114 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:46:12.168125 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:46:12.168655 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:46:12.171148 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:46:12.176111 systemd[1]: cri-containerd-89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5.scope: Deactivated successfully. Sep 13 00:46:12.199501 systemd[1]: Finished systemd-sysctl.service. 
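The pod_startup_latency_tracker entry above is worth unpacking: podStartE2EDuration runs from the pod's creation timestamp to watchObservedRunningTime, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). The figures in the entry are self-consistent:

```
podStartE2EDuration = 00:46:12.017670895 - 00:45:58.000000000 = 14.017670895 s
image pull window   = 00:46:10.068025593 - 00:45:58.787701106 = 11.280324487 s
podStartSLOduration = 14.017670895 - 11.280324487             =  2.737346408 s
```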
Sep 13 00:46:12.217750 env[1736]: time="2025-09-13T00:46:12.217704009Z" level=info msg="shim disconnected" id=89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5 Sep 13 00:46:12.217750 env[1736]: time="2025-09-13T00:46:12.217746674Z" level=warning msg="cleaning up after shim disconnected" id=89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5 namespace=k8s.io Sep 13 00:46:12.217750 env[1736]: time="2025-09-13T00:46:12.217755768Z" level=info msg="cleaning up dead shim" Sep 13 00:46:12.227110 env[1736]: time="2025-09-13T00:46:12.227045571Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3401 runtime=io.containerd.runc.v2\n" Sep 13 00:46:12.993531 env[1736]: time="2025-09-13T00:46:12.993488604Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:46:13.008324 systemd[1]: run-containerd-runc-k8s.io-89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5-runc.XbzJ3M.mount: Deactivated successfully. Sep 13 00:46:13.008482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5-rootfs.mount: Deactivated successfully. Sep 13 00:46:13.021497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732247958.mount: Deactivated successfully. Sep 13 00:46:13.036459 env[1736]: time="2025-09-13T00:46:13.036403974Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\"" Sep 13 00:46:13.037261 env[1736]: time="2025-09-13T00:46:13.037225519Z" level=info msg="StartContainer for \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\"" Sep 13 00:46:13.075188 systemd[1]: Started cri-containerd-6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad.scope. Sep 13 00:46:13.110273 env[1736]: time="2025-09-13T00:46:13.110232339Z" level=info msg="StartContainer for \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\" returns successfully" Sep 13 00:46:13.258405 systemd[1]: cri-containerd-6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad.scope: Deactivated successfully. 
Sep 13 00:46:13.291784 env[1736]: time="2025-09-13T00:46:13.291738303Z" level=info msg="shim disconnected" id=6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad Sep 13 00:46:13.291784 env[1736]: time="2025-09-13T00:46:13.291779519Z" level=warning msg="cleaning up after shim disconnected" id=6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad namespace=k8s.io Sep 13 00:46:13.291784 env[1736]: time="2025-09-13T00:46:13.291791327Z" level=info msg="cleaning up dead shim" Sep 13 00:46:13.300907 env[1736]: time="2025-09-13T00:46:13.300865393Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:46:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3458 runtime=io.containerd.runc.v2\n" Sep 13 00:46:14.001750 env[1736]: time="2025-09-13T00:46:14.001699934Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:46:14.008248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad-rootfs.mount: Deactivated successfully. Sep 13 00:46:14.030727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530325954.mount: Deactivated successfully. Sep 13 00:46:14.042658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029894199.mount: Deactivated successfully. Sep 13 00:46:14.050262 env[1736]: time="2025-09-13T00:46:14.050200935Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\"" Sep 13 00:46:14.051232 env[1736]: time="2025-09-13T00:46:14.051186851Z" level=info msg="StartContainer for \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\"" Sep 13 00:46:14.072381 systemd[1]: Started cri-containerd-cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25.scope. Sep 13 00:46:14.107977 systemd[1]: cri-containerd-cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25.scope: Deactivated successfully. 
Sep 13 00:46:14.110227 env[1736]: time="2025-09-13T00:46:14.110173694Z" level=info msg="StartContainer for \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\" returns successfully" Sep 13 00:46:14.143938 env[1736]: time="2025-09-13T00:46:14.143880360Z" level=info msg="shim disconnected" id=cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25 Sep 13 00:46:14.143938 env[1736]: time="2025-09-13T00:46:14.143936960Z" level=warning msg="cleaning up after shim disconnected" id=cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25 namespace=k8s.io Sep 13 00:46:14.144179 env[1736]: time="2025-09-13T00:46:14.143947813Z" level=info msg="cleaning up dead shim" Sep 13 00:46:14.155483 env[1736]: time="2025-09-13T00:46:14.155429316Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:46:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3516 runtime=io.containerd.runc.v2\n" Sep 13 00:46:14.325445 kubelet[2592]: W0913 00:46:14.325309 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef130339_b9b2_4c11_bf34_8fe5bc1ff2c5.slice/cri-containerd-3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f.scope WatchSource:0}: task 3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f not found Sep 13 00:46:15.005206 env[1736]: time="2025-09-13T00:46:15.005143175Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:46:15.040474 env[1736]: time="2025-09-13T00:46:15.040428099Z" level=info msg="CreateContainer within sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\"" Sep 13 00:46:15.043082 env[1736]: time="2025-09-13T00:46:15.041204298Z" level=info msg="StartContainer for \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\"" Sep 13 00:46:15.066271 systemd[1]: Started cri-containerd-e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d.scope. Sep 13 00:46:15.106922 env[1736]: time="2025-09-13T00:46:15.106817592Z" level=info msg="StartContainer for \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\" returns successfully" Sep 13 00:46:15.351193 kubelet[2592]: I0913 00:46:15.350327 2592 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:46:15.454615 systemd[1]: Created slice kubepods-burstable-pod09871489_bb2a_48b6_8c3c_bffb898934cb.slice. Sep 13 00:46:15.460380 systemd[1]: Created slice kubepods-burstable-pod25fa21c5_6905_44c2_ad8d_80f655ad9b6f.slice. 
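Between 00:46:10 and 00:46:14 the log walks through Cilium's init-container chain (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent starts: each init container starts, runs to completion, its scope is deactivated, the shim reports "shim disconnected", and cAdvisor's watch events race against task deletion, hence the "task ... not found" warnings. One way to observe that lifecycle directly is containerd's event stream; a sketch, with the same socket and namespace assumptions as above:

```go
// Sketch: subscribe to containerd's event stream to watch the task
// lifecycle behind the "shim disconnected" and "failed to process
// watch event" lines above. Filters (e.g. on topic) could be passed
// to Subscribe; here everything in the k8s.io namespace is printed.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ch, errs := client.Subscribe(ctx)
	for {
		select {
		case env := <-ch:
			// Topics look like /tasks/create, /tasks/start, /tasks/exit.
			fmt.Printf("%s %s\n", env.Timestamp, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```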
Sep 13 00:46:15.541630 kubelet[2592]: I0913 00:46:15.541559 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25fa21c5-6905-44c2-ad8d-80f655ad9b6f-config-volume\") pod \"coredns-674b8bbfcf-bvj78\" (UID: \"25fa21c5-6905-44c2-ad8d-80f655ad9b6f\") " pod="kube-system/coredns-674b8bbfcf-bvj78" Sep 13 00:46:15.541630 kubelet[2592]: I0913 00:46:15.541633 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09871489-bb2a-48b6-8c3c-bffb898934cb-config-volume\") pod \"coredns-674b8bbfcf-kl42b\" (UID: \"09871489-bb2a-48b6-8c3c-bffb898934cb\") " pod="kube-system/coredns-674b8bbfcf-kl42b" Sep 13 00:46:15.541838 kubelet[2592]: I0913 00:46:15.541656 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnw2\" (UniqueName: \"kubernetes.io/projected/09871489-bb2a-48b6-8c3c-bffb898934cb-kube-api-access-2dnw2\") pod \"coredns-674b8bbfcf-kl42b\" (UID: \"09871489-bb2a-48b6-8c3c-bffb898934cb\") " pod="kube-system/coredns-674b8bbfcf-kl42b" Sep 13 00:46:15.541838 kubelet[2592]: I0913 00:46:15.541677 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnps\" (UniqueName: \"kubernetes.io/projected/25fa21c5-6905-44c2-ad8d-80f655ad9b6f-kube-api-access-zjnps\") pod \"coredns-674b8bbfcf-bvj78\" (UID: \"25fa21c5-6905-44c2-ad8d-80f655ad9b6f\") " pod="kube-system/coredns-674b8bbfcf-bvj78" Sep 13 00:46:15.759296 env[1736]: time="2025-09-13T00:46:15.758945286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kl42b,Uid:09871489-bb2a-48b6-8c3c-bffb898934cb,Namespace:kube-system,Attempt:0,}" Sep 13 00:46:15.764399 env[1736]: time="2025-09-13T00:46:15.764347548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvj78,Uid:25fa21c5-6905-44c2-ad8d-80f655ad9b6f,Namespace:kube-system,Attempt:0,}" Sep 13 00:46:17.438467 kubelet[2592]: W0913 00:46:17.438429 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef130339_b9b2_4c11_bf34_8fe5bc1ff2c5.slice/cri-containerd-89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5.scope WatchSource:0}: task 89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5 not found Sep 13 00:46:18.019564 systemd[1]: run-containerd-runc-k8s.io-e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d-runc.0XKqvz.mount: Deactivated successfully. 
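With the node newly ready, the two coredns-674b8bbfcf pods are scheduled, their config and token volumes attached, and their sandboxes requested. Once CoreDNS is serving, cluster DNS can be exercised directly from the Go standard library; in the sketch below the 10.96.0.10 ClusterIP is hypothetical, so read the real address from the kube-dns Service before using it:

```go
// Sketch: point a Go resolver at the CoreDNS pods brought up above.
// The 10.96.0.10:53 address is a hypothetical kube-dns ClusterIP,
// not taken from this log; everything else is stdlib.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(),
		"kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addrs)
}
```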
Sep 13 00:46:20.550795 kubelet[2592]: W0913 00:46:20.550690 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef130339_b9b2_4c11_bf34_8fe5bc1ff2c5.slice/cri-containerd-6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad.scope WatchSource:0}: task 6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad not found Sep 13 00:46:20.628886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:46:20.629132 systemd-networkd[1470]: cilium_host: Link UP Sep 13 00:46:20.629247 systemd-networkd[1470]: cilium_net: Link UP Sep 13 00:46:20.629251 systemd-networkd[1470]: cilium_net: Gained carrier Sep 13 00:46:20.629382 systemd-networkd[1470]: cilium_host: Gained carrier Sep 13 00:46:20.629693 (udev-worker)[3669]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:46:20.630164 systemd-networkd[1470]: cilium_host: Gained IPv6LL Sep 13 00:46:20.631302 (udev-worker)[3727]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:46:20.863793 systemd-networkd[1470]: cilium_net: Gained IPv6LL Sep 13 00:46:20.880904 systemd-networkd[1470]: cilium_vxlan: Link UP Sep 13 00:46:20.880916 systemd-networkd[1470]: cilium_vxlan: Gained carrier Sep 13 00:46:21.935773 systemd-networkd[1470]: cilium_vxlan: Gained IPv6LL Sep 13 00:46:22.516434 systemd[1]: run-containerd-runc-k8s.io-e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d-runc.HanKUs.mount: Deactivated successfully. Sep 13 00:46:22.812629 kernel: NET: Registered PF_ALG protocol family Sep 13 00:46:23.590963 systemd-networkd[1470]: lxc_health: Link UP Sep 13 00:46:23.612044 systemd-networkd[1470]: lxc_health: Gained carrier Sep 13 00:46:23.613167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:46:23.660944 kubelet[2592]: W0913 00:46:23.658943 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef130339_b9b2_4c11_bf34_8fe5bc1ff2c5.slice/cri-containerd-cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25.scope WatchSource:0}: task cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25 not found Sep 13 00:46:23.846477 systemd-networkd[1470]: lxcf4467715d8c9: Link UP Sep 13 00:46:23.857435 (udev-worker)[3739]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:46:23.865531 systemd-networkd[1470]: lxc5c2fae6c3dd9: Link UP Sep 13 00:46:23.870926 (udev-worker)[3738]: Network interface NamePolicy= disabled on kernel command line. 
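systemd-networkd here reports the datapath devices Cilium creates on the node: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, and lxc_health for endpoint health checking; the lxc* links that follow are per-pod veth endpoints for the CoreDNS pods. A sketch that lists those links using the third-party github.com/vishvananda/netlink package (Linux only, needs to run on the node):

```go
// Sketch: enumerate the links systemd-networkd reported above
// (cilium_host, cilium_net, cilium_vxlan, lxc_health, lxc*).
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		name := l.Attrs().Name
		if strings.HasPrefix(name, "cilium_") || strings.HasPrefix(name, "lxc") {
			fmt.Printf("%-16s type=%-8s state=%s\n",
				name, l.Type(), l.Attrs().OperState)
		}
	}
}
```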
Sep 13 00:46:23.872655 kernel: eth0: renamed from tmp6de72 Sep 13 00:46:23.885665 kernel: eth0: renamed from tmp8fd51 Sep 13 00:46:23.898062 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5c2fae6c3dd9: link becomes ready Sep 13 00:46:23.897400 systemd-networkd[1470]: lxc5c2fae6c3dd9: Gained carrier Sep 13 00:46:23.902979 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf4467715d8c9: link becomes ready Sep 13 00:46:23.902725 systemd-networkd[1470]: lxcf4467715d8c9: Gained carrier Sep 13 00:46:24.560888 kubelet[2592]: I0913 00:46:24.560803 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-656dt" podStartSLOduration=14.560753897 podStartE2EDuration="14.560753897s" podCreationTimestamp="2025-09-13 00:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:46:16.043701887 +0000 UTC m=+21.348824390" watchObservedRunningTime="2025-09-13 00:46:24.560753897 +0000 UTC m=+29.865876405" Sep 13 00:46:25.391846 systemd-networkd[1470]: lxc_health: Gained IPv6LL Sep 13 00:46:25.647863 systemd-networkd[1470]: lxcf4467715d8c9: Gained IPv6LL Sep 13 00:46:25.903792 systemd-networkd[1470]: lxc5c2fae6c3dd9: Gained IPv6LL Sep 13 00:46:26.973058 systemd[1]: run-containerd-runc-k8s.io-e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d-runc.Xlf4E5.mount: Deactivated successfully. Sep 13 00:46:28.467892 env[1736]: time="2025-09-13T00:46:28.467812327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:46:28.468241 env[1736]: time="2025-09-13T00:46:28.467899542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:46:28.468241 env[1736]: time="2025-09-13T00:46:28.467921070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:46:28.468241 env[1736]: time="2025-09-13T00:46:28.468066658Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6de723b62dbee7536d3a7a7480fa872064aea7d700d0544531bcc8ccb2818cb9 pid=4161 runtime=io.containerd.runc.v2 Sep 13 00:46:28.503662 systemd[1]: run-containerd-runc-k8s.io-6de723b62dbee7536d3a7a7480fa872064aea7d700d0544531bcc8ccb2818cb9-runc.UeTa9n.mount: Deactivated successfully. Sep 13 00:46:28.507092 systemd[1]: Started cri-containerd-6de723b62dbee7536d3a7a7480fa872064aea7d700d0544531bcc8ccb2818cb9.scope. Sep 13 00:46:28.519292 env[1736]: time="2025-09-13T00:46:28.519223465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:46:28.519517 env[1736]: time="2025-09-13T00:46:28.519481547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:46:28.521712 env[1736]: time="2025-09-13T00:46:28.521664972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:46:28.522220 env[1736]: time="2025-09-13T00:46:28.522176212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fd51876874937096b319358f86bd5453d12a361e2e394cc0a6a0037fd15a5c4 pid=4189 runtime=io.containerd.runc.v2 Sep 13 00:46:28.554710 systemd[1]: Started cri-containerd-8fd51876874937096b319358f86bd5453d12a361e2e394cc0a6a0037fd15a5c4.scope. Sep 13 00:46:28.643297 env[1736]: time="2025-09-13T00:46:28.643248586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvj78,Uid:25fa21c5-6905-44c2-ad8d-80f655ad9b6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fd51876874937096b319358f86bd5453d12a361e2e394cc0a6a0037fd15a5c4\"" Sep 13 00:46:28.663536 env[1736]: time="2025-09-13T00:46:28.663485165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kl42b,Uid:09871489-bb2a-48b6-8c3c-bffb898934cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6de723b62dbee7536d3a7a7480fa872064aea7d700d0544531bcc8ccb2818cb9\"" Sep 13 00:46:28.665934 env[1736]: time="2025-09-13T00:46:28.665885052Z" level=info msg="CreateContainer within sandbox \"8fd51876874937096b319358f86bd5453d12a361e2e394cc0a6a0037fd15a5c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:46:28.672865 env[1736]: time="2025-09-13T00:46:28.672748872Z" level=info msg="CreateContainer within sandbox \"6de723b62dbee7536d3a7a7480fa872064aea7d700d0544531bcc8ccb2818cb9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:46:28.735877 env[1736]: time="2025-09-13T00:46:28.735758477Z" level=info msg="CreateContainer within sandbox \"6de723b62dbee7536d3a7a7480fa872064aea7d700d0544531bcc8ccb2818cb9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d96a150fc428c3debb31c002724e378ad0d3479971d5ca6507b64e45d5fa5e4f\"" Sep 13 00:46:28.738016 env[1736]: time="2025-09-13T00:46:28.736454346Z" level=info msg="StartContainer for \"d96a150fc428c3debb31c002724e378ad0d3479971d5ca6507b64e45d5fa5e4f\"" Sep 13 00:46:28.743732 env[1736]: time="2025-09-13T00:46:28.743642585Z" level=info msg="CreateContainer within sandbox \"8fd51876874937096b319358f86bd5453d12a361e2e394cc0a6a0037fd15a5c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9b495f575f7f25a88392ba4a07ef44762074e34d861887970de3da4108ae3c7\"" Sep 13 00:46:28.745854 env[1736]: time="2025-09-13T00:46:28.745773544Z" level=info msg="StartContainer for \"e9b495f575f7f25a88392ba4a07ef44762074e34d861887970de3da4108ae3c7\"" Sep 13 00:46:28.758355 systemd[1]: Started cri-containerd-d96a150fc428c3debb31c002724e378ad0d3479971d5ca6507b64e45d5fa5e4f.scope. Sep 13 00:46:28.773794 systemd[1]: Started cri-containerd-e9b495f575f7f25a88392ba4a07ef44762074e34d861887970de3da4108ae3c7.scope. 
Sep 13 00:46:28.851396 env[1736]: time="2025-09-13T00:46:28.851339773Z" level=info msg="StartContainer for \"d96a150fc428c3debb31c002724e378ad0d3479971d5ca6507b64e45d5fa5e4f\" returns successfully" Sep 13 00:46:28.855928 env[1736]: time="2025-09-13T00:46:28.855850249Z" level=info msg="StartContainer for \"e9b495f575f7f25a88392ba4a07ef44762074e34d861887970de3da4108ae3c7\" returns successfully" Sep 13 00:46:29.066078 kubelet[2592]: I0913 00:46:29.066016 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bvj78" podStartSLOduration=31.065999069 podStartE2EDuration="31.065999069s" podCreationTimestamp="2025-09-13 00:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:46:29.065573885 +0000 UTC m=+34.370696395" watchObservedRunningTime="2025-09-13 00:46:29.065999069 +0000 UTC m=+34.371121576" Sep 13 00:46:29.082429 kubelet[2592]: I0913 00:46:29.082356 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kl42b" podStartSLOduration=31.082334826 podStartE2EDuration="31.082334826s" podCreationTimestamp="2025-09-13 00:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:46:29.081869698 +0000 UTC m=+34.386992206" watchObservedRunningTime="2025-09-13 00:46:29.082334826 +0000 UTC m=+34.387457334" Sep 13 00:46:30.600684 sudo[1973]: pam_unix(sudo:session): session closed for user root Sep 13 00:46:30.635142 sshd[1970]: pam_unix(sshd:session): session closed for user core Sep 13 00:46:30.646115 systemd[1]: sshd@4-172.31.31.206:22-147.75.109.163:41914.service: Deactivated successfully. Sep 13 00:46:30.646875 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:46:30.647004 systemd[1]: session-5.scope: Consumed 5.940s CPU time. Sep 13 00:46:30.647236 systemd-logind[1731]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:46:30.648227 systemd-logind[1731]: Removed session 5. 
Sep 13 00:46:54.842915 env[1736]: time="2025-09-13T00:46:54.842821486Z" level=info msg="StopPodSandbox for \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\"" Sep 13 00:46:54.843305 env[1736]: time="2025-09-13T00:46:54.843025303Z" level=info msg="TearDown network for sandbox \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" successfully" Sep 13 00:46:54.843305 env[1736]: time="2025-09-13T00:46:54.843079921Z" level=info msg="StopPodSandbox for \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" returns successfully" Sep 13 00:46:54.843454 env[1736]: time="2025-09-13T00:46:54.843424912Z" level=info msg="RemovePodSandbox for \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\"" Sep 13 00:46:54.843511 env[1736]: time="2025-09-13T00:46:54.843459889Z" level=info msg="Forcibly stopping sandbox \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\"" Sep 13 00:46:54.843548 env[1736]: time="2025-09-13T00:46:54.843529092Z" level=info msg="TearDown network for sandbox \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" successfully" Sep 13 00:46:54.851207 env[1736]: time="2025-09-13T00:46:54.851060366Z" level=info msg="RemovePodSandbox \"a97715e9ca6d3cf4bb630b819276d7886b4db73ab88e77d1e1c36bb88e50aaad\" returns successfully" Sep 13 00:47:01.201029 systemd[1]: Started sshd@5-172.31.31.206:22-147.75.109.163:42522.service. Sep 13 00:47:01.434211 sshd[4366]: Accepted publickey for core from 147.75.109.163 port 42522 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:01.437337 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:01.459411 systemd-logind[1731]: New session 6 of user core. Sep 13 00:47:01.460022 systemd[1]: Started session-6.scope. Sep 13 00:47:02.197749 sshd[4366]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:02.219873 systemd[1]: sshd@5-172.31.31.206:22-147.75.109.163:42522.service: Deactivated successfully. Sep 13 00:47:02.227426 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:47:02.233359 systemd-logind[1731]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:47:02.242135 systemd-logind[1731]: Removed session 6. Sep 13 00:47:07.221161 systemd[1]: Started sshd@6-172.31.31.206:22-147.75.109.163:42526.service. Sep 13 00:47:07.382113 sshd[4381]: Accepted publickey for core from 147.75.109.163 port 42526 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:07.384036 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:07.389621 systemd[1]: Started session-7.scope. Sep 13 00:47:07.390459 systemd-logind[1731]: New session 7 of user core. Sep 13 00:47:07.587165 sshd[4381]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:07.590174 systemd[1]: sshd@6-172.31.31.206:22-147.75.109.163:42526.service: Deactivated successfully. Sep 13 00:47:07.590951 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:47:07.591510 systemd-logind[1731]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:47:07.592487 systemd-logind[1731]: Removed session 7. Sep 13 00:47:12.613499 systemd[1]: Started sshd@7-172.31.31.206:22-147.75.109.163:38850.service. 
Sep 13 00:47:12.776871 sshd[4393]: Accepted publickey for core from 147.75.109.163 port 38850 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:12.778300 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:12.783388 systemd[1]: Started session-8.scope. Sep 13 00:47:12.784032 systemd-logind[1731]: New session 8 of user core. Sep 13 00:47:12.989751 sshd[4393]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:12.993367 systemd[1]: sshd@7-172.31.31.206:22-147.75.109.163:38850.service: Deactivated successfully. Sep 13 00:47:12.994152 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:47:12.994758 systemd-logind[1731]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:47:12.995559 systemd-logind[1731]: Removed session 8. Sep 13 00:47:18.013811 systemd[1]: Started sshd@8-172.31.31.206:22-147.75.109.163:38862.service. Sep 13 00:47:18.170748 sshd[4406]: Accepted publickey for core from 147.75.109.163 port 38862 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:18.172699 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:18.178489 systemd[1]: Started session-9.scope. Sep 13 00:47:18.179041 systemd-logind[1731]: New session 9 of user core. Sep 13 00:47:18.376879 sshd[4406]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:18.379745 systemd[1]: sshd@8-172.31.31.206:22-147.75.109.163:38862.service: Deactivated successfully. Sep 13 00:47:18.380899 systemd-logind[1731]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:47:18.380938 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:47:18.382101 systemd-logind[1731]: Removed session 9. Sep 13 00:47:18.401762 systemd[1]: Started sshd@9-172.31.31.206:22-147.75.109.163:38876.service. Sep 13 00:47:18.559069 sshd[4419]: Accepted publickey for core from 147.75.109.163 port 38876 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:18.560690 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:18.565825 systemd[1]: Started session-10.scope. Sep 13 00:47:18.566405 systemd-logind[1731]: New session 10 of user core. Sep 13 00:47:18.806455 sshd[4419]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:18.812396 systemd[1]: sshd@9-172.31.31.206:22-147.75.109.163:38876.service: Deactivated successfully. Sep 13 00:47:18.814105 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:47:18.814848 systemd-logind[1731]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:47:18.815697 systemd-logind[1731]: Removed session 10. Sep 13 00:47:18.830117 systemd[1]: Started sshd@10-172.31.31.206:22-147.75.109.163:38878.service. Sep 13 00:47:19.006855 sshd[4428]: Accepted publickey for core from 147.75.109.163 port 38878 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:19.010026 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:19.022680 systemd-logind[1731]: New session 11 of user core. Sep 13 00:47:19.026927 systemd[1]: Started session-11.scope. Sep 13 00:47:19.234567 sshd[4428]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:19.237939 systemd[1]: sshd@10-172.31.31.206:22-147.75.109.163:38878.service: Deactivated successfully. Sep 13 00:47:19.238708 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 13 00:47:19.239315 systemd-logind[1731]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:47:19.240164 systemd-logind[1731]: Removed session 11. Sep 13 00:47:24.260526 systemd[1]: Started sshd@11-172.31.31.206:22-147.75.109.163:44628.service. Sep 13 00:47:24.422739 sshd[4439]: Accepted publickey for core from 147.75.109.163 port 44628 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:24.424135 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:24.429078 systemd[1]: Started session-12.scope. Sep 13 00:47:24.429653 systemd-logind[1731]: New session 12 of user core. Sep 13 00:47:24.612836 sshd[4439]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:24.615594 systemd[1]: sshd@11-172.31.31.206:22-147.75.109.163:44628.service: Deactivated successfully. Sep 13 00:47:24.616345 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:47:24.616857 systemd-logind[1731]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:47:24.617742 systemd-logind[1731]: Removed session 12. Sep 13 00:47:29.639144 systemd[1]: Started sshd@12-172.31.31.206:22-147.75.109.163:44630.service. Sep 13 00:47:29.798326 sshd[4452]: Accepted publickey for core from 147.75.109.163 port 44630 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:29.800193 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:29.805319 systemd[1]: Started session-13.scope. Sep 13 00:47:29.805831 systemd-logind[1731]: New session 13 of user core. Sep 13 00:47:30.003586 sshd[4452]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:30.007590 systemd-logind[1731]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:47:30.007886 systemd[1]: sshd@12-172.31.31.206:22-147.75.109.163:44630.service: Deactivated successfully. Sep 13 00:47:30.008922 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:47:30.010108 systemd-logind[1731]: Removed session 13. Sep 13 00:47:30.028767 systemd[1]: Started sshd@13-172.31.31.206:22-147.75.109.163:54144.service. Sep 13 00:47:30.188259 sshd[4464]: Accepted publickey for core from 147.75.109.163 port 54144 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:30.189825 sshd[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:30.195515 systemd[1]: Started session-14.scope. Sep 13 00:47:30.196380 systemd-logind[1731]: New session 14 of user core. Sep 13 00:47:33.991127 sshd[4464]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:33.995244 systemd[1]: sshd@13-172.31.31.206:22-147.75.109.163:54144.service: Deactivated successfully. Sep 13 00:47:33.996385 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:47:33.996891 systemd-logind[1731]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:47:33.998013 systemd-logind[1731]: Removed session 14. Sep 13 00:47:34.018876 systemd[1]: Started sshd@14-172.31.31.206:22-147.75.109.163:54154.service. Sep 13 00:47:34.196618 sshd[4476]: Accepted publickey for core from 147.75.109.163 port 54154 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:34.198005 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:34.204445 systemd[1]: Started session-15.scope. Sep 13 00:47:34.205190 systemd-logind[1731]: New session 15 of user core. 
Sep 13 00:47:35.256143 sshd[4476]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:35.259974 systemd[1]: sshd@14-172.31.31.206:22-147.75.109.163:54154.service: Deactivated successfully. Sep 13 00:47:35.261044 systemd-logind[1731]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:47:35.261154 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:47:35.263547 systemd-logind[1731]: Removed session 15. Sep 13 00:47:35.280063 systemd[1]: Started sshd@15-172.31.31.206:22-147.75.109.163:54162.service. Sep 13 00:47:35.436227 sshd[4493]: Accepted publickey for core from 147.75.109.163 port 54162 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:35.437713 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:35.442622 systemd[1]: Started session-16.scope. Sep 13 00:47:35.442959 systemd-logind[1731]: New session 16 of user core. Sep 13 00:47:35.818420 sshd[4493]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:35.821848 systemd[1]: sshd@15-172.31.31.206:22-147.75.109.163:54162.service: Deactivated successfully. Sep 13 00:47:35.822814 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:47:35.822837 systemd-logind[1731]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:47:35.824190 systemd-logind[1731]: Removed session 16. Sep 13 00:47:35.845138 systemd[1]: Started sshd@16-172.31.31.206:22-147.75.109.163:54166.service. Sep 13 00:47:36.008521 sshd[4503]: Accepted publickey for core from 147.75.109.163 port 54166 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:36.010523 sshd[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:36.017505 systemd[1]: Started session-17.scope. Sep 13 00:47:36.018064 systemd-logind[1731]: New session 17 of user core. Sep 13 00:47:36.210073 sshd[4503]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:36.213701 systemd[1]: sshd@16-172.31.31.206:22-147.75.109.163:54166.service: Deactivated successfully. Sep 13 00:47:36.214418 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:47:36.214985 systemd-logind[1731]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:47:36.215861 systemd-logind[1731]: Removed session 17. Sep 13 00:47:41.234679 systemd[1]: Started sshd@17-172.31.31.206:22-147.75.109.163:57086.service. Sep 13 00:47:41.391385 sshd[4516]: Accepted publickey for core from 147.75.109.163 port 57086 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:41.392908 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:41.398697 systemd-logind[1731]: New session 18 of user core. Sep 13 00:47:41.398883 systemd[1]: Started session-18.scope. Sep 13 00:47:41.588320 sshd[4516]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:41.591463 systemd[1]: sshd@17-172.31.31.206:22-147.75.109.163:57086.service: Deactivated successfully. Sep 13 00:47:41.592227 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:47:41.593259 systemd-logind[1731]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:47:41.593994 systemd-logind[1731]: Removed session 18. Sep 13 00:47:46.614473 systemd[1]: Started sshd@18-172.31.31.206:22-147.75.109.163:57090.service. 
Sep 13 00:47:46.777855 sshd[4530]: Accepted publickey for core from 147.75.109.163 port 57090 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:46.779819 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:46.788195 systemd[1]: Started session-19.scope. Sep 13 00:47:46.788504 systemd-logind[1731]: New session 19 of user core. Sep 13 00:47:47.008125 sshd[4530]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:47.011912 systemd[1]: sshd@18-172.31.31.206:22-147.75.109.163:57090.service: Deactivated successfully. Sep 13 00:47:47.012891 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:47:47.013657 systemd-logind[1731]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:47:47.015853 systemd-logind[1731]: Removed session 19. Sep 13 00:47:52.032403 systemd[1]: Started sshd@19-172.31.31.206:22-147.75.109.163:54918.service. Sep 13 00:47:52.188759 sshd[4543]: Accepted publickey for core from 147.75.109.163 port 54918 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:52.190161 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:52.195213 systemd[1]: Started session-20.scope. Sep 13 00:47:52.195771 systemd-logind[1731]: New session 20 of user core. Sep 13 00:47:52.379639 sshd[4543]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:52.383510 systemd[1]: sshd@19-172.31.31.206:22-147.75.109.163:54918.service: Deactivated successfully. Sep 13 00:47:52.384463 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:47:52.385170 systemd-logind[1731]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:47:52.386174 systemd-logind[1731]: Removed session 20. Sep 13 00:47:52.405424 systemd[1]: Started sshd@20-172.31.31.206:22-147.75.109.163:54926.service. Sep 13 00:47:52.568231 sshd[4555]: Accepted publickey for core from 147.75.109.163 port 54926 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:52.568819 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:52.573774 systemd[1]: Started session-21.scope. Sep 13 00:47:52.574103 systemd-logind[1731]: New session 21 of user core. Sep 13 00:47:54.626974 env[1736]: time="2025-09-13T00:47:54.626922815Z" level=info msg="StopContainer for \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\" with timeout 30 (s)" Sep 13 00:47:54.628610 env[1736]: time="2025-09-13T00:47:54.628556225Z" level=info msg="Stop container \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\" with signal terminated" Sep 13 00:47:54.648192 systemd[1]: cri-containerd-47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac.scope: Deactivated successfully. Sep 13 00:47:54.673567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac-rootfs.mount: Deactivated successfully. 
Sep 13 00:47:54.691059 env[1736]: time="2025-09-13T00:47:54.691008583Z" level=info msg="shim disconnected" id=47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac Sep 13 00:47:54.691059 env[1736]: time="2025-09-13T00:47:54.691055229Z" level=warning msg="cleaning up after shim disconnected" id=47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac namespace=k8s.io Sep 13 00:47:54.691059 env[1736]: time="2025-09-13T00:47:54.691064841Z" level=info msg="cleaning up dead shim" Sep 13 00:47:54.702061 env[1736]: time="2025-09-13T00:47:54.702004220Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:47:54.702208 env[1736]: time="2025-09-13T00:47:54.702077472Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:47:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4598 runtime=io.containerd.runc.v2\n" Sep 13 00:47:54.706353 env[1736]: time="2025-09-13T00:47:54.706297072Z" level=info msg="StopContainer for \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\" returns successfully" Sep 13 00:47:54.708166 env[1736]: time="2025-09-13T00:47:54.708136424Z" level=info msg="StopPodSandbox for \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\"" Sep 13 00:47:54.708365 env[1736]: time="2025-09-13T00:47:54.708348074Z" level=info msg="Container to stop \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:47:54.710334 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c-shm.mount: Deactivated successfully. Sep 13 00:47:54.713702 env[1736]: time="2025-09-13T00:47:54.713670907Z" level=info msg="StopContainer for \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\" with timeout 2 (s)" Sep 13 00:47:54.713999 env[1736]: time="2025-09-13T00:47:54.713970386Z" level=info msg="Stop container \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\" with signal terminated" Sep 13 00:47:54.720314 systemd[1]: cri-containerd-0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c.scope: Deactivated successfully. Sep 13 00:47:54.756221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c-rootfs.mount: Deactivated successfully. 
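The teardown at 00:47:54 follows the CRI stop sequence: StopContainer with a grace period (30 s for the operator, 2 s for the agent) delivers SIGTERM, and only if the container outlives the timeout is it force-killed. A containerd-client sketch of that mechanism, using the operator's container ID from the log; the grace-period handling below illustrates the idea rather than reproducing the kubelet's exact code path:

```go
// Sketch of 'StopContainer ... with timeout 30 (s)': SIGTERM the
// task, wait up to the grace period, then SIGKILL. The container ID
// is the cilium-operator container from the log entries above.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx,
		"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(30 * time.Second):
		// Grace period elapsed; escalate, as the kubelet would.
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			log.Fatal(err)
		}
	}
}
```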
Sep 13 00:47:54.764803 systemd-networkd[1470]: lxc_health: Link DOWN Sep 13 00:47:54.764810 systemd-networkd[1470]: lxc_health: Lost carrier Sep 13 00:47:54.777620 env[1736]: time="2025-09-13T00:47:54.777391785Z" level=info msg="shim disconnected" id=0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c Sep 13 00:47:54.777620 env[1736]: time="2025-09-13T00:47:54.777435960Z" level=warning msg="cleaning up after shim disconnected" id=0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c namespace=k8s.io Sep 13 00:47:54.777620 env[1736]: time="2025-09-13T00:47:54.777444401Z" level=info msg="cleaning up dead shim" Sep 13 00:47:54.792190 env[1736]: time="2025-09-13T00:47:54.791973037Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:47:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4638 runtime=io.containerd.runc.v2\n" Sep 13 00:47:54.792377 env[1736]: time="2025-09-13T00:47:54.792343829Z" level=info msg="TearDown network for sandbox \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" successfully" Sep 13 00:47:54.792377 env[1736]: time="2025-09-13T00:47:54.792373221Z" level=info msg="StopPodSandbox for \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" returns successfully" Sep 13 00:47:54.795822 systemd[1]: cri-containerd-e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d.scope: Deactivated successfully. Sep 13 00:47:54.796055 systemd[1]: cri-containerd-e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d.scope: Consumed 8.234s CPU time. Sep 13 00:47:54.827290 kubelet[2592]: I0913 00:47:54.826755 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-cilium-config-path\") pod \"70513f69-0ebc-4eb7-9df0-10b1ff3a073a\" (UID: \"70513f69-0ebc-4eb7-9df0-10b1ff3a073a\") " Sep 13 00:47:54.827290 kubelet[2592]: I0913 00:47:54.826801 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9lx2\" (UniqueName: \"kubernetes.io/projected/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-kube-api-access-z9lx2\") pod \"70513f69-0ebc-4eb7-9df0-10b1ff3a073a\" (UID: \"70513f69-0ebc-4eb7-9df0-10b1ff3a073a\") " Sep 13 00:47:54.833737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d-rootfs.mount: Deactivated successfully. Sep 13 00:47:54.840162 kubelet[2592]: I0913 00:47:54.840114 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70513f69-0ebc-4eb7-9df0-10b1ff3a073a" (UID: "70513f69-0ebc-4eb7-9df0-10b1ff3a073a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:47:54.851814 env[1736]: time="2025-09-13T00:47:54.851763021Z" level=info msg="shim disconnected" id=e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d Sep 13 00:47:54.852158 env[1736]: time="2025-09-13T00:47:54.852139002Z" level=warning msg="cleaning up after shim disconnected" id=e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d namespace=k8s.io Sep 13 00:47:54.852238 env[1736]: time="2025-09-13T00:47:54.852226934Z" level=info msg="cleaning up dead shim" Sep 13 00:47:54.852457 kubelet[2592]: I0913 00:47:54.852413 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-kube-api-access-z9lx2" (OuterVolumeSpecName: "kube-api-access-z9lx2") pod "70513f69-0ebc-4eb7-9df0-10b1ff3a073a" (UID: "70513f69-0ebc-4eb7-9df0-10b1ff3a073a"). InnerVolumeSpecName "kube-api-access-z9lx2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:47:54.859085 kubelet[2592]: I0913 00:47:54.859054 2592 scope.go:117] "RemoveContainer" containerID="47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac" Sep 13 00:47:54.860725 env[1736]: time="2025-09-13T00:47:54.860684982Z" level=info msg="RemoveContainer for \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\"" Sep 13 00:47:54.861563 env[1736]: time="2025-09-13T00:47:54.861538655Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:47:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4668 runtime=io.containerd.runc.v2\n" Sep 13 00:47:54.865338 env[1736]: time="2025-09-13T00:47:54.865307702Z" level=info msg="RemoveContainer for \"47a29db01c379dfdaa51475480ca78140dd5244b75ea632bdfc6b475b07afeac\" returns successfully" Sep 13 00:47:54.866986 env[1736]: time="2025-09-13T00:47:54.866934177Z" level=info msg="StopContainer for \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\" returns successfully" Sep 13 00:47:54.867345 env[1736]: time="2025-09-13T00:47:54.867305655Z" level=info msg="StopPodSandbox for \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\"" Sep 13 00:47:54.867494 env[1736]: time="2025-09-13T00:47:54.867474565Z" level=info msg="Container to stop \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:47:54.867580 env[1736]: time="2025-09-13T00:47:54.867566840Z" level=info msg="Container to stop \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:47:54.867704 env[1736]: time="2025-09-13T00:47:54.867687890Z" level=info msg="Container to stop \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:47:54.868099 env[1736]: time="2025-09-13T00:47:54.867776915Z" level=info msg="Container to stop \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:47:54.868099 env[1736]: time="2025-09-13T00:47:54.867789973Z" level=info msg="Container to stop \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:47:54.870637 env[1736]: time="2025-09-13T00:47:54.870610896Z" level=info msg="StopPodSandbox for 
\"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\"" Sep 13 00:47:54.870808 env[1736]: time="2025-09-13T00:47:54.870775056Z" level=info msg="TearDown network for sandbox \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" successfully" Sep 13 00:47:54.870958 env[1736]: time="2025-09-13T00:47:54.870866884Z" level=info msg="StopPodSandbox for \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" returns successfully" Sep 13 00:47:54.872895 env[1736]: time="2025-09-13T00:47:54.871994814Z" level=info msg="RemovePodSandbox for \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\"" Sep 13 00:47:54.872895 env[1736]: time="2025-09-13T00:47:54.872148025Z" level=info msg="Forcibly stopping sandbox \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\"" Sep 13 00:47:54.872895 env[1736]: time="2025-09-13T00:47:54.872336456Z" level=info msg="TearDown network for sandbox \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" successfully" Sep 13 00:47:54.872636 systemd[1]: Removed slice kubepods-besteffort-pod70513f69_0ebc_4eb7_9df0_10b1ff3a073a.slice. Sep 13 00:47:54.878023 systemd[1]: cri-containerd-3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33.scope: Deactivated successfully. Sep 13 00:47:54.879088 env[1736]: time="2025-09-13T00:47:54.878554744Z" level=info msg="RemovePodSandbox \"0dc150fd85820c99fd5dc13c312941438cfaf8605b1011f9c67f6a2baed2609c\" returns successfully" Sep 13 00:47:54.924277 env[1736]: time="2025-09-13T00:47:54.924214093Z" level=info msg="shim disconnected" id=3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33 Sep 13 00:47:54.924277 env[1736]: time="2025-09-13T00:47:54.924263249Z" level=warning msg="cleaning up after shim disconnected" id=3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33 namespace=k8s.io Sep 13 00:47:54.924277 env[1736]: time="2025-09-13T00:47:54.924273118Z" level=info msg="cleaning up dead shim" Sep 13 00:47:54.928270 kubelet[2592]: I0913 00:47:54.928238 2592 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z9lx2\" (UniqueName: \"kubernetes.io/projected/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-kube-api-access-z9lx2\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:54.928270 kubelet[2592]: I0913 00:47:54.928266 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70513f69-0ebc-4eb7-9df0-10b1ff3a073a-cilium-config-path\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:54.934456 env[1736]: time="2025-09-13T00:47:54.934390013Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:47:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4702 runtime=io.containerd.runc.v2\n" Sep 13 00:47:54.934752 env[1736]: time="2025-09-13T00:47:54.934724549Z" level=info msg="TearDown network for sandbox \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" successfully" Sep 13 00:47:54.934808 env[1736]: time="2025-09-13T00:47:54.934749987Z" level=info msg="StopPodSandbox for \"3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33\" returns successfully" Sep 13 00:47:54.967365 kubelet[2592]: E0913 00:47:54.967313 2592 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:47:55.029215 kubelet[2592]: I0913 00:47:55.029145 2592 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-config-path\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029215 kubelet[2592]: I0913 00:47:55.029193 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cni-path\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029215 kubelet[2592]: I0913 00:47:55.029214 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-clustermesh-secrets\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029231 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-kernel\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029245 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-cgroup\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029261 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-lib-modules\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029278 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-run\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029291 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-xtables-lock\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029307 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hubble-tls\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029322 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-bpf-maps\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029336 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-etc-cni-netd\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029350 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hostproc\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029364 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-net\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.029496 kubelet[2592]: I0913 00:47:55.029385 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl7gk\" (UniqueName: \"kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-kube-api-access-jl7gk\") pod \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\" (UID: \"ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5\") " Sep 13 00:47:55.032259 kubelet[2592]: I0913 00:47:55.029904 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.032259 kubelet[2592]: I0913 00:47:55.032018 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:47:55.032259 kubelet[2592]: I0913 00:47:55.032069 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cni-path" (OuterVolumeSpecName: "cni-path") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.032668 kubelet[2592]: I0913 00:47:55.032637 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034406 kubelet[2592]: I0913 00:47:55.034376 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034523 kubelet[2592]: I0913 00:47:55.034432 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034523 kubelet[2592]: I0913 00:47:55.034460 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034523 kubelet[2592]: I0913 00:47:55.034491 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034523 kubelet[2592]: I0913 00:47:55.034511 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034812 kubelet[2592]: I0913 00:47:55.034526 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hostproc" (OuterVolumeSpecName: "hostproc") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034812 kubelet[2592]: I0913 00:47:55.034541 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:47:55.034812 kubelet[2592]: I0913 00:47:55.034773 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-kube-api-access-jl7gk" (OuterVolumeSpecName: "kube-api-access-jl7gk") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "kube-api-access-jl7gk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:47:55.036842 kubelet[2592]: I0913 00:47:55.036807 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:47:55.038529 kubelet[2592]: I0913 00:47:55.038469 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" (UID: "ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:47:55.129974 kubelet[2592]: I0913 00:47:55.129864 2592 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-clustermesh-secrets\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.130416 kubelet[2592]: I0913 00:47:55.130401 2592 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-kernel\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.130555 kubelet[2592]: I0913 00:47:55.130546 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-cgroup\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.130753 kubelet[2592]: I0913 00:47:55.130743 2592 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-lib-modules\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.130965 kubelet[2592]: I0913 00:47:55.130950 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-run\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131093 kubelet[2592]: I0913 00:47:55.131084 2592 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-xtables-lock\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131257 kubelet[2592]: I0913 00:47:55.131234 2592 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hubble-tls\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131257 kubelet[2592]: I0913 00:47:55.131253 2592 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-bpf-maps\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131257 kubelet[2592]: I0913 00:47:55.131262 2592 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-etc-cni-netd\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131418 kubelet[2592]: I0913 00:47:55.131303 2592 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-hostproc\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131418 kubelet[2592]: I0913 00:47:55.131312 2592 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-host-proc-sys-net\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131418 kubelet[2592]: I0913 
00:47:55.131320 2592 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jl7gk\" (UniqueName: \"kubernetes.io/projected/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-kube-api-access-jl7gk\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131418 kubelet[2592]: I0913 00:47:55.131330 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cilium-config-path\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.131418 kubelet[2592]: I0913 00:47:55.131339 2592 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5-cni-path\") on node \"ip-172-31-31-206\" DevicePath \"\"" Sep 13 00:47:55.223919 kubelet[2592]: I0913 00:47:55.223890 2592 scope.go:117] "RemoveContainer" containerID="e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d" Sep 13 00:47:55.229549 env[1736]: time="2025-09-13T00:47:55.229512704Z" level=info msg="RemoveContainer for \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\"" Sep 13 00:47:55.234219 systemd[1]: Removed slice kubepods-burstable-podef130339_b9b2_4c11_bf34_8fe5bc1ff2c5.slice. Sep 13 00:47:55.234298 systemd[1]: kubepods-burstable-podef130339_b9b2_4c11_bf34_8fe5bc1ff2c5.slice: Consumed 8.366s CPU time. Sep 13 00:47:55.235356 env[1736]: time="2025-09-13T00:47:55.235136440Z" level=info msg="RemoveContainer for \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\" returns successfully" Sep 13 00:47:55.235772 kubelet[2592]: I0913 00:47:55.235749 2592 scope.go:117] "RemoveContainer" containerID="cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25" Sep 13 00:47:55.238449 env[1736]: time="2025-09-13T00:47:55.238117613Z" level=info msg="RemoveContainer for \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\"" Sep 13 00:47:55.243473 env[1736]: time="2025-09-13T00:47:55.243430713Z" level=info msg="RemoveContainer for \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\" returns successfully" Sep 13 00:47:55.243675 kubelet[2592]: I0913 00:47:55.243648 2592 scope.go:117] "RemoveContainer" containerID="6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad" Sep 13 00:47:55.245035 env[1736]: time="2025-09-13T00:47:55.245002734Z" level=info msg="RemoveContainer for \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\"" Sep 13 00:47:55.250444 env[1736]: time="2025-09-13T00:47:55.250402404Z" level=info msg="RemoveContainer for \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\" returns successfully" Sep 13 00:47:55.250677 kubelet[2592]: I0913 00:47:55.250638 2592 scope.go:117] "RemoveContainer" containerID="89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5" Sep 13 00:47:55.252494 env[1736]: time="2025-09-13T00:47:55.252456886Z" level=info msg="RemoveContainer for \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\"" Sep 13 00:47:55.259816 env[1736]: time="2025-09-13T00:47:55.259672288Z" level=info msg="RemoveContainer for \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\" returns successfully" Sep 13 00:47:55.259974 kubelet[2592]: I0913 00:47:55.259938 2592 scope.go:117] "RemoveContainer" containerID="3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f" Sep 13 00:47:55.262243 env[1736]: time="2025-09-13T00:47:55.261811918Z" level=info msg="RemoveContainer for 
\"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\"" Sep 13 00:47:55.267270 env[1736]: time="2025-09-13T00:47:55.267218626Z" level=info msg="RemoveContainer for \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\" returns successfully" Sep 13 00:47:55.267526 kubelet[2592]: I0913 00:47:55.267477 2592 scope.go:117] "RemoveContainer" containerID="e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d" Sep 13 00:47:55.267777 env[1736]: time="2025-09-13T00:47:55.267701411Z" level=error msg="ContainerStatus for \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\": not found" Sep 13 00:47:55.271427 kubelet[2592]: E0913 00:47:55.271388 2592 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\": not found" containerID="e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d" Sep 13 00:47:55.276540 kubelet[2592]: I0913 00:47:55.271689 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d"} err="failed to get container status \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2883cfc529ee9f1773fb74ca8bd0433c4fa89a071b1f520911e7636d78e3d7d\": not found" Sep 13 00:47:55.276540 kubelet[2592]: I0913 00:47:55.276543 2592 scope.go:117] "RemoveContainer" containerID="cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25" Sep 13 00:47:55.277016 env[1736]: time="2025-09-13T00:47:55.276941796Z" level=error msg="ContainerStatus for \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\": not found" Sep 13 00:47:55.277241 kubelet[2592]: E0913 00:47:55.277217 2592 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\": not found" containerID="cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25" Sep 13 00:47:55.277751 kubelet[2592]: I0913 00:47:55.277245 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25"} err="failed to get container status \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbbd74f8d2af0cb3bc6cc47b1a05d45ddd0a5103b34feca20a0c2fbe05065d25\": not found" Sep 13 00:47:55.277751 kubelet[2592]: I0913 00:47:55.277265 2592 scope.go:117] "RemoveContainer" containerID="6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad" Sep 13 00:47:55.277751 kubelet[2592]: E0913 00:47:55.277624 2592 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\": not found" 
containerID="6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad" Sep 13 00:47:55.277751 kubelet[2592]: I0913 00:47:55.277661 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad"} err="failed to get container status \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\": rpc error: code = NotFound desc = an error occurred when try to find container \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\": not found" Sep 13 00:47:55.277751 kubelet[2592]: I0913 00:47:55.277700 2592 scope.go:117] "RemoveContainer" containerID="89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5" Sep 13 00:47:55.277903 env[1736]: time="2025-09-13T00:47:55.277457727Z" level=error msg="ContainerStatus for \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6360f68fd7f1394e973763306c2e1ff98fbaa06a29916a466774214a07753dad\": not found" Sep 13 00:47:55.278105 env[1736]: time="2025-09-13T00:47:55.278056704Z" level=error msg="ContainerStatus for \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\": not found" Sep 13 00:47:55.278214 kubelet[2592]: E0913 00:47:55.278193 2592 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\": not found" containerID="89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5" Sep 13 00:47:55.278260 kubelet[2592]: I0913 00:47:55.278216 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5"} err="failed to get container status \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"89b4b34fca51f41b9fddf08c2c4b5d439c34404c0a65f3c94a3cc2e9b8ede4d5\": not found" Sep 13 00:47:55.278260 kubelet[2592]: I0913 00:47:55.278230 2592 scope.go:117] "RemoveContainer" containerID="3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f" Sep 13 00:47:55.278403 env[1736]: time="2025-09-13T00:47:55.278359879Z" level=error msg="ContainerStatus for \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\": not found" Sep 13 00:47:55.278488 kubelet[2592]: E0913 00:47:55.278468 2592 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\": not found" containerID="3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f" Sep 13 00:47:55.278523 kubelet[2592]: I0913 00:47:55.278492 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f"} err="failed to get container status 
\"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3428b6c2b759ca9e32d3cf74422e6ce3cdc34b9daede7bd285e299c429f4f25f\": not found" Sep 13 00:47:55.614461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33-rootfs.mount: Deactivated successfully. Sep 13 00:47:55.614584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ae355ddea01bd37ee715dcc888705a3ca90b498322b56b02d40476ef34c8d33-shm.mount: Deactivated successfully. Sep 13 00:47:55.614661 systemd[1]: var-lib-kubelet-pods-ef130339\x2db9b2\x2d4c11\x2dbf34\x2d8fe5bc1ff2c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djl7gk.mount: Deactivated successfully. Sep 13 00:47:55.614720 systemd[1]: var-lib-kubelet-pods-ef130339\x2db9b2\x2d4c11\x2dbf34\x2d8fe5bc1ff2c5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:47:55.614775 systemd[1]: var-lib-kubelet-pods-ef130339\x2db9b2\x2d4c11\x2dbf34\x2d8fe5bc1ff2c5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:47:55.614838 systemd[1]: var-lib-kubelet-pods-70513f69\x2d0ebc\x2d4eb7\x2d9df0\x2d10b1ff3a073a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz9lx2.mount: Deactivated successfully. Sep 13 00:47:56.597907 systemd[1]: Started sshd@21-172.31.31.206:22-147.75.109.163:54940.service. Sep 13 00:47:56.602766 sshd[4555]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:56.606968 systemd[1]: sshd@20-172.31.31.206:22-147.75.109.163:54926.service: Deactivated successfully. Sep 13 00:47:56.607762 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:47:56.609889 systemd-logind[1731]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:47:56.611492 systemd-logind[1731]: Removed session 21. Sep 13 00:47:56.773731 sshd[4720]: Accepted publickey for core from 147.75.109.163 port 54940 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:47:56.775359 sshd[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:56.780698 systemd[1]: Started session-22.scope. Sep 13 00:47:56.780725 systemd-logind[1731]: New session 22 of user core. Sep 13 00:47:56.866517 kubelet[2592]: E0913 00:47:56.865172 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-bvj78" podUID="25fa21c5-6905-44c2-ad8d-80f655ad9b6f" Sep 13 00:47:56.868109 kubelet[2592]: I0913 00:47:56.868077 2592 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70513f69-0ebc-4eb7-9df0-10b1ff3a073a" path="/var/lib/kubelet/pods/70513f69-0ebc-4eb7-9df0-10b1ff3a073a/volumes" Sep 13 00:47:56.869553 kubelet[2592]: I0913 00:47:56.869516 2592 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5" path="/var/lib/kubelet/pods/ef130339-b9b2-4c11-bf34-8fe5bc1ff2c5/volumes" Sep 13 00:47:57.518634 sshd[4720]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:57.521892 systemd[1]: sshd@21-172.31.31.206:22-147.75.109.163:54940.service: Deactivated successfully. Sep 13 00:47:57.522772 systemd[1]: session-22.scope: Deactivated successfully. 
Sep 13 00:47:57.523738 systemd-logind[1731]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:47:57.524621 systemd-logind[1731]: Removed session 22.
Sep 13 00:47:57.545161 systemd[1]: Started sshd@22-172.31.31.206:22-147.75.109.163:54954.service.
Sep 13 00:47:57.704414 systemd[1]: Created slice kubepods-burstable-pod02f9ba0a_791c_4033_a146_3aef6a4af43e.slice.
Sep 13 00:47:57.717172 sshd[4731]: Accepted publickey for core from 147.75.109.163 port 54954 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:47:57.718522 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:57.723250 systemd-logind[1731]: New session 23 of user core.
Sep 13 00:47:57.723464 systemd[1]: Started session-23.scope.
Sep 13 00:47:57.744271 kubelet[2592]: I0913 00:47:57.744198 2592 setters.go:618] "Node became not ready" node="ip-172-31-31-206" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:47:57Z","lastTransitionTime":"2025-09-13T00:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:47:57.750462 kubelet[2592]: I0913 00:47:57.750396 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-etc-cni-netd\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.750743 kubelet[2592]: I0913 00:47:57.750721 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-xtables-lock\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.750936 kubelet[2592]: I0913 00:47:57.750917 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-ipsec-secrets\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751055 kubelet[2592]: I0913 00:47:57.751039 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-kernel\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751158 kubelet[2592]: I0913 00:47:57.751143 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-run\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751247 kubelet[2592]: I0913 00:47:57.751231 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-hubble-tls\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751328 kubelet[2592]: I0913 00:47:57.751314 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-clustermesh-secrets\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751407 kubelet[2592]: I0913 00:47:57.751392 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-bpf-maps\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751494 kubelet[2592]: I0913 00:47:57.751478 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-cgroup\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751571 kubelet[2592]: I0913 00:47:57.751557 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt4nv\" (UniqueName: \"kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-kube-api-access-dt4nv\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751701 kubelet[2592]: I0913 00:47:57.751685 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cni-path\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751801 kubelet[2592]: I0913 00:47:57.751786 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-lib-modules\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751885 kubelet[2592]: I0913 00:47:57.751868 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-config-path\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.751978 kubelet[2592]: I0913 00:47:57.751965 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-hostproc\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:57.752063 kubelet[2592]: I0913 00:47:57.752053 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-net\") pod \"cilium-8qdwn\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") " pod="kube-system/cilium-8qdwn"
Sep 13 00:47:58.010390 env[1736]: time="2025-09-13T00:47:58.010335800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qdwn,Uid:02f9ba0a-791c-4033-a146-3aef6a4af43e,Namespace:kube-system,Attempt:0,}"
Sep 13 00:47:58.011571 sshd[4731]: pam_unix(sshd:session): session closed for user core
Sep 13 00:47:58.018566 systemd-logind[1731]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:47:58.020864 systemd[1]: sshd@22-172.31.31.206:22-147.75.109.163:54954.service: Deactivated successfully.
Sep 13 00:47:58.021907 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:47:58.023875 systemd-logind[1731]: Removed session 23.
Sep 13 00:47:58.044022 systemd[1]: Started sshd@23-172.31.31.206:22-147.75.109.163:54958.service.
Sep 13 00:47:58.056019 env[1736]: time="2025-09-13T00:47:58.055941488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:47:58.058801 env[1736]: time="2025-09-13T00:47:58.058728886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:47:58.059011 env[1736]: time="2025-09-13T00:47:58.058983540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:47:58.059370 env[1736]: time="2025-09-13T00:47:58.059329185Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9 pid=4755 runtime=io.containerd.runc.v2
Sep 13 00:47:58.083141 systemd[1]: Started cri-containerd-92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9.scope.
Sep 13 00:47:58.119712 env[1736]: time="2025-09-13T00:47:58.119662038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qdwn,Uid:02f9ba0a-791c-4033-a146-3aef6a4af43e,Namespace:kube-system,Attempt:0,} returns sandbox id \"92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9\""
Sep 13 00:47:58.128089 env[1736]: time="2025-09-13T00:47:58.128050896Z" level=info msg="CreateContainer within sandbox \"92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:47:58.150023 env[1736]: time="2025-09-13T00:47:58.149962698Z" level=info msg="CreateContainer within sandbox \"92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\""
Sep 13 00:47:58.152220 env[1736]: time="2025-09-13T00:47:58.151029726Z" level=info msg="StartContainer for \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\""
Sep 13 00:47:58.171073 systemd[1]: Started cri-containerd-d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4.scope.
Sep 13 00:47:58.185426 systemd[1]: cri-containerd-d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4.scope: Deactivated successfully.
Sep 13 00:47:58.209762 env[1736]: time="2025-09-13T00:47:58.209694776Z" level=info msg="shim disconnected" id=d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4
Sep 13 00:47:58.209762 env[1736]: time="2025-09-13T00:47:58.209764331Z" level=warning msg="cleaning up after shim disconnected" id=d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4 namespace=k8s.io
Sep 13 00:47:58.209762 env[1736]: time="2025-09-13T00:47:58.209777879Z" level=info msg="cleaning up dead shim"
Sep 13 00:47:58.220015 env[1736]: time="2025-09-13T00:47:58.219963116Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:47:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4814 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:47:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Sep 13 00:47:58.220332 env[1736]: time="2025-09-13T00:47:58.220263587Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed"
Sep 13 00:47:58.220708 env[1736]: time="2025-09-13T00:47:58.220663541Z" level=error msg="Failed to pipe stdout of container \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\"" error="reading from a closed fifo"
Sep 13 00:47:58.220822 env[1736]: time="2025-09-13T00:47:58.220663496Z" level=error msg="Failed to pipe stderr of container \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\"" error="reading from a closed fifo"
Sep 13 00:47:58.223987 sshd[4752]: Accepted publickey for core from 147.75.109.163 port 54958 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:47:58.224324 env[1736]: time="2025-09-13T00:47:58.223931483Z" level=error msg="StartContainer for \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Sep 13 00:47:58.224386 kubelet[2592]: E0913 00:47:58.224193 2592 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4"
Sep 13 00:47:58.227508 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:58.231842 kubelet[2592]: E0913 00:47:58.231798 2592 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Sep 13 00:47:58.231842 kubelet[2592]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Sep 13 00:47:58.231842 kubelet[2592]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Sep 13 00:47:58.231842 kubelet[2592]: rm /hostbin/cilium-mount
Sep 13 00:47:58.231842 kubelet[2592]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dt4nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8qdwn_kube-system(02f9ba0a-791c-4033-a146-3aef6a4af43e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Sep 13 00:47:58.231842 kubelet[2592]: > logger="UnhandledError"
Sep 13 00:47:58.240824 kubelet[2592]: E0913 00:47:58.232904 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8qdwn" podUID="02f9ba0a-791c-4033-a146-3aef6a4af43e"
Sep 13 00:47:58.236070 systemd[1]: Started session-24.scope.
Sep 13 00:47:58.241147 env[1736]: time="2025-09-13T00:47:58.235982312Z" level=info msg="StopPodSandbox for \"92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9\""
Sep 13 00:47:58.241147 env[1736]: time="2025-09-13T00:47:58.236046975Z" level=info msg="Container to stop \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:47:58.240271 systemd-logind[1731]: New session 24 of user core.
Sep 13 00:47:58.247279 systemd[1]: cri-containerd-92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9.scope: Deactivated successfully.
Sep 13 00:47:58.317974 env[1736]: time="2025-09-13T00:47:58.317687111Z" level=info msg="shim disconnected" id=92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9
Sep 13 00:47:58.318329 env[1736]: time="2025-09-13T00:47:58.318301374Z" level=warning msg="cleaning up after shim disconnected" id=92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9 namespace=k8s.io
Sep 13 00:47:58.318432 env[1736]: time="2025-09-13T00:47:58.318417380Z" level=info msg="cleaning up dead shim"
Sep 13 00:47:58.335194 env[1736]: time="2025-09-13T00:47:58.335137773Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:47:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4846 runtime=io.containerd.runc.v2\n"
Sep 13 00:47:58.335771 env[1736]: time="2025-09-13T00:47:58.335733495Z" level=info msg="TearDown network for sandbox \"92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9\" successfully"
Sep 13 00:47:58.335929 env[1736]: time="2025-09-13T00:47:58.335907321Z" level=info msg="StopPodSandbox for \"92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9\" returns successfully"
Sep 13 00:47:58.370073 kubelet[2592]: I0913 00:47:58.370032 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-bpf-maps\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.370405 kubelet[2592]: I0913 00:47:58.370378 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-clustermesh-secrets\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.370685 kubelet[2592]: I0913 00:47:58.370668 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cni-path\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.370834 kubelet[2592]: I0913 00:47:58.370809 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-kernel\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.370956 kubelet[2592]: I0913 00:47:58.370943 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-xtables-lock\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371074 kubelet[2592]: I0913 00:47:58.371061 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-lib-modules\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371192 kubelet[2592]: I0913 00:47:58.371178 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-hubble-tls\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371308 kubelet[2592]: I0913 00:47:58.371295 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-cgroup\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371422 kubelet[2592]: I0913 00:47:58.371408 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt4nv\" (UniqueName: \"kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-kube-api-access-dt4nv\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371546 kubelet[2592]: I0913 00:47:58.371533 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-config-path\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371673 kubelet[2592]: I0913 00:47:58.371659 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-etc-cni-netd\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371837 kubelet[2592]: I0913 00:47:58.371788 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-ipsec-secrets\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.371982 kubelet[2592]: I0913 00:47:58.371819 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-net\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.372128 kubelet[2592]: I0913 00:47:58.371967 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-run\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.372128 kubelet[2592]: I0913 00:47:58.372084 2592 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-hostproc\") pod \"02f9ba0a-791c-4033-a146-3aef6a4af43e\" (UID: \"02f9ba0a-791c-4033-a146-3aef6a4af43e\") "
Sep 13 00:47:58.372403 kubelet[2592]: I0913 00:47:58.372303 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-hostproc" (OuterVolumeSpecName: "hostproc") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.372403 kubelet[2592]: I0913 00:47:58.372359 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.375634 kubelet[2592]: I0913 00:47:58.375583 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.375829 kubelet[2592]: I0913 00:47:58.375811 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cni-path" (OuterVolumeSpecName: "cni-path") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.376105 kubelet[2592]: I0913 00:47:58.376084 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.376216 kubelet[2592]: I0913 00:47:58.376201 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.376300 kubelet[2592]: I0913 00:47:58.376288 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.384016 kubelet[2592]: I0913 00:47:58.383970 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.384223 kubelet[2592]: I0913 00:47:58.384204 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.384429 kubelet[2592]: I0913 00:47:58.384404 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:47:58.393634 kubelet[2592]: I0913 00:47:58.390662 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:47:58.393634 kubelet[2592]: I0913 00:47:58.390785 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:47:58.397131 kubelet[2592]: I0913 00:47:58.395052 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:47:58.400124 kubelet[2592]: I0913 00:47:58.400072 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:47:58.409087 kubelet[2592]: I0913 00:47:58.409041 2592 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-kube-api-access-dt4nv" (OuterVolumeSpecName: "kube-api-access-dt4nv") pod "02f9ba0a-791c-4033-a146-3aef6a4af43e" (UID: "02f9ba0a-791c-4033-a146-3aef6a4af43e"). InnerVolumeSpecName "kube-api-access-dt4nv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:47:58.472834 kubelet[2592]: I0913 00:47:58.472797 2592 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-kernel\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473053 kubelet[2592]: I0913 00:47:58.473041 2592 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-xtables-lock\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473156 kubelet[2592]: I0913 00:47:58.473145 2592 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-lib-modules\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473245 kubelet[2592]: I0913 00:47:58.473235 2592 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-hubble-tls\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473332 kubelet[2592]: I0913 00:47:58.473321 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-cgroup\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473433 kubelet[2592]: I0913 00:47:58.473422 2592 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dt4nv\" (UniqueName: \"kubernetes.io/projected/02f9ba0a-791c-4033-a146-3aef6a4af43e-kube-api-access-dt4nv\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473530 kubelet[2592]: I0913 00:47:58.473518 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-config-path\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473632 kubelet[2592]: I0913 00:47:58.473622 2592 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-etc-cni-netd\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473732 kubelet[2592]: I0913 00:47:58.473722 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-ipsec-secrets\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473825 kubelet[2592]: I0913 00:47:58.473815 2592 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-host-proc-sys-net\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.473917 kubelet[2592]: I0913 00:47:58.473908 2592 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cilium-run\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.474007 kubelet[2592]: I0913 00:47:58.473996 2592 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-hostproc\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.474096 kubelet[2592]: I0913 00:47:58.474086 2592 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-bpf-maps\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.474187 kubelet[2592]: I0913 00:47:58.474177 2592 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02f9ba0a-791c-4033-a146-3aef6a4af43e-clustermesh-secrets\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.474276 kubelet[2592]: I0913 00:47:58.474267 2592 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02f9ba0a-791c-4033-a146-3aef6a4af43e-cni-path\") on node \"ip-172-31-31-206\" DevicePath \"\""
Sep 13 00:47:58.860431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92fb26800066ace3b8769947059fe1800225fe9be0f657be022a8038f9e16df9-shm.mount: Deactivated successfully.
Sep 13 00:47:58.860545 systemd[1]: var-lib-kubelet-pods-02f9ba0a\x2d791c\x2d4033\x2da146\x2d3aef6a4af43e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:47:58.860624 systemd[1]: var-lib-kubelet-pods-02f9ba0a\x2d791c\x2d4033\x2da146\x2d3aef6a4af43e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddt4nv.mount: Deactivated successfully.
Sep 13 00:47:58.860689 systemd[1]: var-lib-kubelet-pods-02f9ba0a\x2d791c\x2d4033\x2da146\x2d3aef6a4af43e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:47:58.860745 systemd[1]: var-lib-kubelet-pods-02f9ba0a\x2d791c\x2d4033\x2da146\x2d3aef6a4af43e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:47:58.866097 kubelet[2592]: E0913 00:47:58.866055 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-bvj78" podUID="25fa21c5-6905-44c2-ad8d-80f655ad9b6f"
Sep 13 00:47:58.871110 systemd[1]: Removed slice kubepods-burstable-pod02f9ba0a_791c_4033_a146_3aef6a4af43e.slice.
Sep 13 00:47:59.238865 kubelet[2592]: I0913 00:47:59.238452 2592 scope.go:117] "RemoveContainer" containerID="d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4"
Sep 13 00:47:59.241460 env[1736]: time="2025-09-13T00:47:59.241176835Z" level=info msg="RemoveContainer for \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\""
Sep 13 00:47:59.246341 env[1736]: time="2025-09-13T00:47:59.246224675Z" level=info msg="RemoveContainer for \"d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4\" returns successfully"
Sep 13 00:47:59.297676 systemd[1]: Created slice kubepods-burstable-poda5a83c3b_539b_49c9_9551_187c85fc553d.slice.
Sep 13 00:47:59.378005 kubelet[2592]: I0913 00:47:59.377961 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5a83c3b-539b-49c9-9551-187c85fc553d-cilium-config-path\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378196 kubelet[2592]: I0913 00:47:59.378180 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a5a83c3b-539b-49c9-9551-187c85fc553d-cilium-ipsec-secrets\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378303 kubelet[2592]: I0913 00:47:59.378292 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-lib-modules\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378396 kubelet[2592]: I0913 00:47:59.378386 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-host-proc-sys-net\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378486 kubelet[2592]: I0913 00:47:59.378477 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-host-proc-sys-kernel\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378561 kubelet[2592]: I0913 00:47:59.378552 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5a83c3b-539b-49c9-9551-187c85fc553d-clustermesh-secrets\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378661 kubelet[2592]: I0913 00:47:59.378650 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-hostproc\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378746 kubelet[2592]: I0913 00:47:59.378733 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-cilium-cgroup\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.378934 kubelet[2592]: I0913 00:47:59.378921 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-xtables-lock\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.379020 kubelet[2592]: I0913 00:47:59.379010 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-bpf-maps\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.380086 kubelet[2592]: I0913 00:47:59.379112 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5a83c3b-539b-49c9-9551-187c85fc553d-hubble-tls\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.380086 kubelet[2592]: I0913 00:47:59.379129 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsxc\" (UniqueName: \"kubernetes.io/projected/a5a83c3b-539b-49c9-9551-187c85fc553d-kube-api-access-5xsxc\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.380086 kubelet[2592]: I0913 00:47:59.379148 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-cilium-run\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.380086 kubelet[2592]: I0913 00:47:59.379176 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-cni-path\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.380086 kubelet[2592]: I0913 00:47:59.379192 2592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5a83c3b-539b-49c9-9551-187c85fc553d-etc-cni-netd\") pod \"cilium-rrfq5\" (UID: \"a5a83c3b-539b-49c9-9551-187c85fc553d\") " pod="kube-system/cilium-rrfq5"
Sep 13 00:47:59.600617 env[1736]: time="2025-09-13T00:47:59.600533698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rrfq5,Uid:a5a83c3b-539b-49c9-9551-187c85fc553d,Namespace:kube-system,Attempt:0,}"
Sep 13 00:47:59.621991 env[1736]: time="2025-09-13T00:47:59.621888150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:47:59.621991 env[1736]: time="2025-09-13T00:47:59.621933262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:47:59.621991 env[1736]: time="2025-09-13T00:47:59.621944288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:47:59.622429 env[1736]: time="2025-09-13T00:47:59.622378741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8 pid=4879 runtime=io.containerd.runc.v2
Sep 13 00:47:59.635241 systemd[1]: Started cri-containerd-8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8.scope.
Sep 13 00:47:59.663550 env[1736]: time="2025-09-13T00:47:59.663491471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rrfq5,Uid:a5a83c3b-539b-49c9-9551-187c85fc553d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\""
Sep 13 00:47:59.672927 env[1736]: time="2025-09-13T00:47:59.672870627Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:47:59.695392 env[1736]: time="2025-09-13T00:47:59.695341838Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2\""
Sep 13 00:47:59.697356 env[1736]: time="2025-09-13T00:47:59.697321381Z" level=info msg="StartContainer for \"c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2\""
Sep 13 00:47:59.722912 systemd[1]: Started cri-containerd-c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2.scope.
Sep 13 00:47:59.757236 env[1736]: time="2025-09-13T00:47:59.757183527Z" level=info msg="StartContainer for \"c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2\" returns successfully"
Sep 13 00:47:59.778683 systemd[1]: cri-containerd-c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2.scope: Deactivated successfully.
Sep 13 00:47:59.827380 env[1736]: time="2025-09-13T00:47:59.827330076Z" level=info msg="shim disconnected" id=c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2
Sep 13 00:47:59.827380 env[1736]: time="2025-09-13T00:47:59.827377523Z" level=warning msg="cleaning up after shim disconnected" id=c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2 namespace=k8s.io
Sep 13 00:47:59.827380 env[1736]: time="2025-09-13T00:47:59.827387294Z" level=info msg="cleaning up dead shim"
Sep 13 00:47:59.835985 env[1736]: time="2025-09-13T00:47:59.835939469Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:47:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4963 runtime=io.containerd.runc.v2\n"
Sep 13 00:47:59.968908 kubelet[2592]: E0913 00:47:59.968618 2592 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:48:00.257690 env[1736]: time="2025-09-13T00:48:00.254679091Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:48:00.281795 env[1736]: time="2025-09-13T00:48:00.281718892Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537\""
Sep 13 00:48:00.282522 env[1736]: time="2025-09-13T00:48:00.282474873Z" level=info msg="StartContainer for \"d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537\""
Sep 13 00:48:00.319745 systemd[1]: Started cri-containerd-d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537.scope.
Sep 13 00:48:00.364253 env[1736]: time="2025-09-13T00:48:00.364205476Z" level=info msg="StartContainer for \"d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537\" returns successfully"
Sep 13 00:48:00.379844 systemd[1]: cri-containerd-d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537.scope: Deactivated successfully.
Sep 13 00:48:00.416759 env[1736]: time="2025-09-13T00:48:00.416711207Z" level=info msg="shim disconnected" id=d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537
Sep 13 00:48:00.416759 env[1736]: time="2025-09-13T00:48:00.416755358Z" level=warning msg="cleaning up after shim disconnected" id=d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537 namespace=k8s.io
Sep 13 00:48:00.416759 env[1736]: time="2025-09-13T00:48:00.416765526Z" level=info msg="cleaning up dead shim"
Sep 13 00:48:00.426139 env[1736]: time="2025-09-13T00:48:00.426094203Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5026 runtime=io.containerd.runc.v2\n"
Sep 13 00:48:00.860633 systemd[1]: run-containerd-runc-k8s.io-d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537-runc.UphpbE.mount: Deactivated successfully.
Sep 13 00:48:00.860837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537-rootfs.mount: Deactivated successfully.
Sep 13 00:48:00.867832 kubelet[2592]: I0913 00:48:00.867796 2592 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f9ba0a-791c-4033-a146-3aef6a4af43e" path="/var/lib/kubelet/pods/02f9ba0a-791c-4033-a146-3aef6a4af43e/volumes"
Sep 13 00:48:00.868429 kubelet[2592]: E0913 00:48:00.868398 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-bvj78" podUID="25fa21c5-6905-44c2-ad8d-80f655ad9b6f"
Sep 13 00:48:01.288566 env[1736]: time="2025-09-13T00:48:01.288508792Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:48:01.318482 kubelet[2592]: W0913 00:48:01.318431 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f9ba0a_791c_4033_a146_3aef6a4af43e.slice/cri-containerd-d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4.scope WatchSource:0}: container "d73af8c36320234e1b9436ee59e704344dd9e4fc9170f0a1720a5950ec8339d4" in namespace "k8s.io": not found
Sep 13 00:48:01.471630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753406468.mount: Deactivated successfully.
Sep 13 00:48:01.546056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373575522.mount: Deactivated successfully.
Sep 13 00:48:01.634393 env[1736]: time="2025-09-13T00:48:01.634326171Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4\""
Sep 13 00:48:01.730489 env[1736]: time="2025-09-13T00:48:01.730436989Z" level=info msg="StartContainer for \"038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4\""
Sep 13 00:48:01.969562 kubelet[2592]: E0913 00:48:01.969429 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-bvj78" podUID="25fa21c5-6905-44c2-ad8d-80f655ad9b6f"
Sep 13 00:48:02.205140 systemd[1]: Started cri-containerd-038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4.scope.
Sep 13 00:48:02.409429 systemd[1]: run-containerd-runc-k8s.io-038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4-runc.guunh8.mount: Deactivated successfully.
Sep 13 00:48:02.721713 env[1736]: time="2025-09-13T00:48:02.721560920Z" level=info msg="StartContainer for \"038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4\" returns successfully"
Sep 13 00:48:02.784663 systemd[1]: cri-containerd-038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4.scope: Deactivated successfully.
Sep 13 00:48:02.856711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4-rootfs.mount: Deactivated successfully.
Sep 13 00:48:02.881928 env[1736]: time="2025-09-13T00:48:02.881864073Z" level=info msg="shim disconnected" id=038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4
Sep 13 00:48:02.881928 env[1736]: time="2025-09-13T00:48:02.881922777Z" level=warning msg="cleaning up after shim disconnected" id=038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4 namespace=k8s.io
Sep 13 00:48:02.881928 env[1736]: time="2025-09-13T00:48:02.881935250Z" level=info msg="cleaning up dead shim"
Sep 13 00:48:02.896983 env[1736]: time="2025-09-13T00:48:02.896921403Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5088 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:48:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Sep 13 00:48:03.527691 env[1736]: time="2025-09-13T00:48:03.527655653Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:48:03.556119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159482947.mount: Deactivated successfully.
Sep 13 00:48:03.573360 env[1736]: time="2025-09-13T00:48:03.573304176Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc\""
Sep 13 00:48:03.574252 env[1736]: time="2025-09-13T00:48:03.574211964Z" level=info msg="StartContainer for \"30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc\""
Sep 13 00:48:03.605297 systemd[1]: Started cri-containerd-30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc.scope.
Sep 13 00:48:03.639446 systemd[1]: cri-containerd-30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc.scope: Deactivated successfully.
Sep 13 00:48:03.641813 env[1736]: time="2025-09-13T00:48:03.641775772Z" level=info msg="StartContainer for \"30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc\" returns successfully"
Sep 13 00:48:03.680391 env[1736]: time="2025-09-13T00:48:03.680320192Z" level=info msg="shim disconnected" id=30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc
Sep 13 00:48:03.680883 env[1736]: time="2025-09-13T00:48:03.680859328Z" level=warning msg="cleaning up after shim disconnected" id=30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc namespace=k8s.io
Sep 13 00:48:03.681015 env[1736]: time="2025-09-13T00:48:03.681001482Z" level=info msg="cleaning up dead shim"
Sep 13 00:48:03.691939 env[1736]: time="2025-09-13T00:48:03.691853509Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5146 runtime=io.containerd.runc.v2\n"
Sep 13 00:48:03.865851 kubelet[2592]: E0913 00:48:03.865702 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-bvj78" podUID="25fa21c5-6905-44c2-ad8d-80f655ad9b6f"
Sep 13 00:48:04.533226 env[1736]: time="2025-09-13T00:48:04.533176213Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:48:04.553699 systemd[1]: run-containerd-runc-k8s.io-30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc-runc.NkhfQx.mount: Deactivated successfully.
Sep 13 00:48:04.553835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc-rootfs.mount: Deactivated successfully.
Sep 13 00:48:04.564017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106156648.mount: Deactivated successfully.
Sep 13 00:48:04.573329 env[1736]: time="2025-09-13T00:48:04.573283262Z" level=info msg="CreateContainer within sandbox \"8fdebea16e1a2d2b0a7be5e2d565f893affc1f6b0ce512b23ce8772da00a27e8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2cb5300132d59461d0a1c034b3d25e9cb8a19f444375733e635f632134c5d38f\""
Sep 13 00:48:04.574267 env[1736]: time="2025-09-13T00:48:04.574232808Z" level=info msg="StartContainer for \"2cb5300132d59461d0a1c034b3d25e9cb8a19f444375733e635f632134c5d38f\""
Sep 13 00:48:04.601244 systemd[1]: Started cri-containerd-2cb5300132d59461d0a1c034b3d25e9cb8a19f444375733e635f632134c5d38f.scope.
Sep 13 00:48:04.646033 env[1736]: time="2025-09-13T00:48:04.645988403Z" level=info msg="StartContainer for \"2cb5300132d59461d0a1c034b3d25e9cb8a19f444375733e635f632134c5d38f\" returns successfully"
Sep 13 00:48:04.867095 kubelet[2592]: E0913 00:48:04.866703 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-kl42b" podUID="09871489-bb2a-48b6-8c3c-bffb898934cb"
Sep 13 00:48:04.920122 kubelet[2592]: W0913 00:48:04.920053 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a83c3b_539b_49c9_9551_187c85fc553d.slice/cri-containerd-c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2.scope WatchSource:0}: task c966be7c91fa4e1394d2bbe4d3fab5498e31c69481e591d16430b6a3b5094cf2 not found
Sep 13 00:48:05.288630 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:48:06.930926 systemd[1]: run-containerd-runc-k8s.io-2cb5300132d59461d0a1c034b3d25e9cb8a19f444375733e635f632134c5d38f-runc.A7attW.mount: Deactivated successfully.
Sep 13 00:48:08.028868 kubelet[2592]: W0913 00:48:08.028792 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a83c3b_539b_49c9_9551_187c85fc553d.slice/cri-containerd-d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537.scope WatchSource:0}: task d75d322ba32b130bc249cd0bf4d192dea26d82398d77867a5fa8950e44f7e537 not found
Sep 13 00:48:08.327948 systemd-networkd[1470]: lxc_health: Link UP
Sep 13 00:48:08.333466 (udev-worker)[5707]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:48:08.349853 systemd-networkd[1470]: lxc_health: Gained carrier
Sep 13 00:48:08.350685 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:48:09.632221 kubelet[2592]: I0913 00:48:09.632152 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rrfq5" podStartSLOduration=10.632134145 podStartE2EDuration="10.632134145s" podCreationTimestamp="2025-09-13 00:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:05.552756336 +0000 UTC m=+130.857878842" watchObservedRunningTime="2025-09-13 00:48:09.632134145 +0000 UTC m=+134.937256653"
Sep 13 00:48:09.777620 systemd-networkd[1470]: lxc_health: Gained IPv6LL
Sep 13 00:48:11.142914 kubelet[2592]: W0913 00:48:11.142770 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a83c3b_539b_49c9_9551_187c85fc553d.slice/cri-containerd-038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4.scope WatchSource:0}: task 038b42d8a79143c5770b94f9f2688bb49f43c46091bd29399037346cc4bc0ca4 not found
Sep 13 00:48:11.504772 systemd[1]: run-containerd-runc-k8s.io-2cb5300132d59461d0a1c034b3d25e9cb8a19f444375733e635f632134c5d38f-runc.u80355.mount: Deactivated successfully.
Sep 13 00:48:13.832717 sshd[4752]: pam_unix(sshd:session): session closed for user core
Sep 13 00:48:13.836848 systemd[1]: sshd@23-172.31.31.206:22-147.75.109.163:54958.service: Deactivated successfully.
Sep 13 00:48:13.837863 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:48:13.838500 systemd-logind[1731]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:48:13.840518 systemd-logind[1731]: Removed session 24.
Sep 13 00:48:14.251217 kubelet[2592]: W0913 00:48:14.251097 2592 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a83c3b_539b_49c9_9551_187c85fc553d.slice/cri-containerd-30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc.scope WatchSource:0}: task 30b61c23060cb8dfc1b9147245dea54010410f163e778809e4c4c7dcaa8447cc not found