Sep 6 00:03:01.000893 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 6 00:03:01.000930 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025
Sep 6 00:03:01.000953 kernel: efi: EFI v2.70 by EDK II
Sep 6 00:03:01.000969 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Sep 6 00:03:01.000983 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:03:01.000997 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 6 00:03:01.001013 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 6 00:03:01.001028 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 6 00:03:01.001042 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 6 00:03:01.001056 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 6 00:03:01.001075 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 6 00:03:01.001090 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 6 00:03:01.001104 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 6 00:03:01.001118 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 6 00:03:01.001135 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 6 00:03:01.001154 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 6 00:03:01.001169 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 6 00:03:01.001184 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 6 00:03:01.001198 kernel: printk: bootconsole [uart0] enabled
Sep 6 00:03:01.001213 kernel: NUMA: Failed to initialise from firmware
Sep 6 00:03:01.001244 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 6 00:03:01.001320 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Sep 6 00:03:01.001339 kernel: Zone ranges:
Sep 6 00:03:01.001354 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 6 00:03:01.001369 kernel: DMA32 empty
Sep 6 00:03:01.001384 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 6 00:03:01.001403 kernel: Movable zone start for each node
Sep 6 00:03:01.001418 kernel: Early memory node ranges
Sep 6 00:03:01.001433 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 6 00:03:01.001448 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 6 00:03:01.001463 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 6 00:03:01.001478 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 6 00:03:01.001492 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 6 00:03:01.001507 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 6 00:03:01.001522 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 6 00:03:01.001536 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 6 00:03:01.001551 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 6 00:03:01.001565 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 6 00:03:01.001584 kernel: psci: probing for conduit method from ACPI.
Sep 6 00:03:01.001599 kernel: psci: PSCIv1.0 detected in firmware.
Sep 6 00:03:01.001620 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 00:03:01.001636 kernel: psci: Trusted OS migration not required
Sep 6 00:03:01.001651 kernel: psci: SMC Calling Convention v1.1
Sep 6 00:03:01.001670 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 6 00:03:01.001686 kernel: ACPI: SRAT not present
Sep 6 00:03:01.001701 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 6 00:03:01.001717 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 6 00:03:01.001733 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 6 00:03:01.001748 kernel: Detected PIPT I-cache on CPU0
Sep 6 00:03:01.001763 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 00:03:01.001778 kernel: CPU features: detected: Spectre-v2
Sep 6 00:03:01.001794 kernel: CPU features: detected: Spectre-v3a
Sep 6 00:03:01.001809 kernel: CPU features: detected: Spectre-BHB
Sep 6 00:03:01.001824 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 00:03:01.001843 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 00:03:01.001859 kernel: CPU features: detected: ARM erratum 1742098
Sep 6 00:03:01.001874 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 6 00:03:01.001889 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 6 00:03:01.001904 kernel: Policy zone: Normal
Sep 6 00:03:01.001922 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 00:03:01.001939 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:03:01.001955 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:03:01.001971 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:03:01.001986 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:03:01.002005 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 6 00:03:01.002021 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Sep 6 00:03:01.002037 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:03:01.002052 kernel: trace event string verifier disabled
Sep 6 00:03:01.002068 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 00:03:01.002084 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:03:01.002100 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:03:01.002115 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 00:03:01.002131 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:03:01.002147 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:03:01.002162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:03:01.002177 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 00:03:01.002197 kernel: GICv3: 96 SPIs implemented
Sep 6 00:03:01.002212 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 00:03:01.002227 kernel: GICv3: Distributor has no Range Selector support
Sep 6 00:03:01.002279 kernel: Root IRQ handler: gic_handle_irq
Sep 6 00:03:01.008283 kernel: GICv3: 16 PPIs implemented
Sep 6 00:03:01.008307 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 6 00:03:01.008324 kernel: ACPI: SRAT not present
Sep 6 00:03:01.008340 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 6 00:03:01.008356 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 00:03:01.008372 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Sep 6 00:03:01.008388 kernel: GICv3: using LPI property table @0x00000004000b0000
Sep 6 00:03:01.008411 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 6 00:03:01.008427 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Sep 6 00:03:01.008443 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 6 00:03:01.008459 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 6 00:03:01.008475 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 6 00:03:01.008490 kernel: Console: colour dummy device 80x25
Sep 6 00:03:01.008507 kernel: printk: console [tty1] enabled
Sep 6 00:03:01.008523 kernel: ACPI: Core revision 20210730
Sep 6 00:03:01.008539 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 6 00:03:01.008556 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:03:01.008575 kernel: LSM: Security Framework initializing
Sep 6 00:03:01.008591 kernel: SELinux: Initializing.
Sep 6 00:03:01.008607 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.008623 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.008639 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:03:01.008655 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 6 00:03:01.008671 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 6 00:03:01.008687 kernel: Remapping and enabling EFI services.
Sep 6 00:03:01.008703 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:03:01.008719 kernel: Detected PIPT I-cache on CPU1
Sep 6 00:03:01.008739 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 6 00:03:01.008755 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Sep 6 00:03:01.008772 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 6 00:03:01.008787 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:03:01.008803 kernel: SMP: Total of 2 processors activated.
Sep 6 00:03:01.008819 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 00:03:01.008834 kernel: CPU features: detected: 32-bit EL1 Support
Sep 6 00:03:01.008850 kernel: CPU features: detected: CRC32 instructions
Sep 6 00:03:01.008866 kernel: CPU: All CPU(s) started at EL1
Sep 6 00:03:01.008885 kernel: alternatives: patching kernel code
Sep 6 00:03:01.008901 kernel: devtmpfs: initialized
Sep 6 00:03:01.008927 kernel: KASLR disabled due to lack of seed
Sep 6 00:03:01.008947 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:03:01.008964 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:03:01.008981 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:03:01.008997 kernel: SMBIOS 3.0.0 present.
Sep 6 00:03:01.009013 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 6 00:03:01.009030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:03:01.009046 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 00:03:01.009063 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 00:03:01.009084 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 00:03:01.009100 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:03:01.009117 kernel: audit: type=2000 audit(0.296:1): state=initialized audit_enabled=0 res=1
Sep 6 00:03:01.009133 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:03:01.009150 kernel: cpuidle: using governor menu
Sep 6 00:03:01.009170 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 00:03:01.009187 kernel: ASID allocator initialised with 32768 entries
Sep 6 00:03:01.009203 kernel: ACPI: bus type PCI registered
Sep 6 00:03:01.009220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:03:01.009255 kernel: Serial: AMBA PL011 UART driver
Sep 6 00:03:01.009275 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:03:01.009293 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 00:03:01.009310 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:03:01.009326 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 00:03:01.009347 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:03:01.009364 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 00:03:01.009381 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:03:01.009397 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:03:01.009413 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:03:01.009430 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:03:01.009446 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:03:01.009463 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:03:01.009479 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:03:01.009495 kernel: ACPI: Interpreter enabled
Sep 6 00:03:01.009516 kernel: ACPI: Using GIC for interrupt routing
Sep 6 00:03:01.009532 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 00:03:01.009549 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 6 00:03:01.009834 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:03:01.010030 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 00:03:01.010220 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 00:03:01.010480 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 6 00:03:01.010680 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 6 00:03:01.010703 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 6 00:03:01.010720 kernel: acpiphp: Slot [1] registered
Sep 6 00:03:01.010737 kernel: acpiphp: Slot [2] registered
Sep 6 00:03:01.010754 kernel: acpiphp: Slot [3] registered
Sep 6 00:03:01.010770 kernel: acpiphp: Slot [4] registered
Sep 6 00:03:01.010787 kernel: acpiphp: Slot [5] registered
Sep 6 00:03:01.010804 kernel: acpiphp: Slot [6] registered
Sep 6 00:03:01.010820 kernel: acpiphp: Slot [7] registered
Sep 6 00:03:01.010842 kernel: acpiphp: Slot [8] registered
Sep 6 00:03:01.010859 kernel: acpiphp: Slot [9] registered
Sep 6 00:03:01.010875 kernel: acpiphp: Slot [10] registered
Sep 6 00:03:01.010892 kernel: acpiphp: Slot [11] registered
Sep 6 00:03:01.010909 kernel: acpiphp: Slot [12] registered
Sep 6 00:03:01.010925 kernel: acpiphp: Slot [13] registered
Sep 6 00:03:01.010941 kernel: acpiphp: Slot [14] registered
Sep 6 00:03:01.010958 kernel: acpiphp: Slot [15] registered
Sep 6 00:03:01.010975 kernel: acpiphp: Slot [16] registered
Sep 6 00:03:01.010995 kernel: acpiphp: Slot [17] registered
Sep 6 00:03:01.011012 kernel: acpiphp: Slot [18] registered
Sep 6 00:03:01.011028 kernel: acpiphp: Slot [19] registered
Sep 6 00:03:01.011045 kernel: acpiphp: Slot [20] registered
Sep 6 00:03:01.011061 kernel: acpiphp: Slot [21] registered
Sep 6 00:03:01.011077 kernel: acpiphp: Slot [22] registered
Sep 6 00:03:01.011094 kernel: acpiphp: Slot [23] registered
Sep 6 00:03:01.011110 kernel: acpiphp: Slot [24] registered
Sep 6 00:03:01.011126 kernel: acpiphp: Slot [25] registered
Sep 6 00:03:01.011143 kernel: acpiphp: Slot [26] registered
Sep 6 00:03:01.011163 kernel: acpiphp: Slot [27] registered
Sep 6 00:03:01.011179 kernel: acpiphp: Slot [28] registered
Sep 6 00:03:01.011196 kernel: acpiphp: Slot [29] registered
Sep 6 00:03:01.011212 kernel: acpiphp: Slot [30] registered
Sep 6 00:03:01.011244 kernel: acpiphp: Slot [31] registered
Sep 6 00:03:01.011268 kernel: PCI host bridge to bus 0000:00
Sep 6 00:03:01.019596 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 6 00:03:01.019792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 00:03:01.019974 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 6 00:03:01.020146 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 6 00:03:01.020388 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 6 00:03:01.020614 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 6 00:03:01.020819 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 6 00:03:01.021025 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 6 00:03:01.021244 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 6 00:03:01.021452 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 6 00:03:01.021666 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 6 00:03:01.021870 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 6 00:03:01.022068 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 6 00:03:01.032838 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 6 00:03:01.033127 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 6 00:03:01.033461 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 6 00:03:01.033690 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 6 00:03:01.033925 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 6 00:03:01.034165 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 6 00:03:01.034447 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 6 00:03:01.034676 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 6 00:03:01.034870 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 00:03:01.035056 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 6 00:03:01.035080 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 00:03:01.035098 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 00:03:01.035115 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 00:03:01.035132 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 00:03:01.035149 kernel: iommu: Default domain type: Translated
Sep 6 00:03:01.035166 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 00:03:01.035183 kernel: vgaarb: loaded
Sep 6 00:03:01.035200 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:03:01.035221 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:03:01.035815 kernel: PTP clock support registered
Sep 6 00:03:01.035837 kernel: Registered efivars operations
Sep 6 00:03:01.035854 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 00:03:01.035871 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:03:01.035888 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:03:01.035905 kernel: pnp: PnP ACPI init
Sep 6 00:03:01.036123 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 6 00:03:01.036148 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 00:03:01.036172 kernel: NET: Registered PF_INET protocol family
Sep 6 00:03:01.036189 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:03:01.036205 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:03:01.036223 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:03:01.036272 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:03:01.036290 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 00:03:01.036307 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:03:01.036324 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.036346 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.036364 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:03:01.036380 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:03:01.036397 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 6 00:03:01.036414 kernel: kvm [1]: HYP mode not available
Sep 6 00:03:01.036431 kernel: Initialise system trusted keyrings
Sep 6 00:03:01.036448 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:03:01.036465 kernel: Key type asymmetric registered
Sep 6 00:03:01.036481 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:03:01.036501 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:03:01.036518 kernel: io scheduler mq-deadline registered
Sep 6 00:03:01.036535 kernel: io scheduler kyber registered
Sep 6 00:03:01.036551 kernel: io scheduler bfq registered
Sep 6 00:03:01.036761 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 6 00:03:01.036787 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 00:03:01.036804 kernel: ACPI: button: Power Button [PWRB]
Sep 6 00:03:01.036822 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 6 00:03:01.036839 kernel: ACPI: button: Sleep Button [SLPB]
Sep 6 00:03:01.036861 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:03:01.036878 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 6 00:03:01.037080 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 6 00:03:01.037104 kernel: printk: console [ttyS0] disabled
Sep 6 00:03:01.037121 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 6 00:03:01.037138 kernel: printk: console [ttyS0] enabled
Sep 6 00:03:01.037155 kernel: printk: bootconsole [uart0] disabled
Sep 6 00:03:01.037172 kernel: thunder_xcv, ver 1.0
Sep 6 00:03:01.037189 kernel: thunder_bgx, ver 1.0
Sep 6 00:03:01.037210 kernel: nicpf, ver 1.0
Sep 6 00:03:01.037227 kernel: nicvf, ver 1.0
Sep 6 00:03:01.043487 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 00:03:01.043693 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:03:00 UTC (1757116980)
Sep 6 00:03:01.043718 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:03:01.043735 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:03:01.043753 kernel: Segment Routing with IPv6
Sep 6 00:03:01.043770 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:03:01.043795 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:03:01.043812 kernel: Key type dns_resolver registered
Sep 6 00:03:01.043829 kernel: registered taskstats version 1
Sep 6 00:03:01.043845 kernel: Loading compiled-in X.509 certificates
Sep 6 00:03:01.043863 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386'
Sep 6 00:03:01.043879 kernel: Key type .fscrypt registered
Sep 6 00:03:01.043895 kernel: Key type fscrypt-provisioning registered
Sep 6 00:03:01.043912 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:03:01.043928 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:03:01.043949 kernel: ima: No architecture policies found
Sep 6 00:03:01.043966 kernel: clk: Disabling unused clocks
Sep 6 00:03:01.043982 kernel: Freeing unused kernel memory: 36416K
Sep 6 00:03:01.043999 kernel: Run /init as init process
Sep 6 00:03:01.044015 kernel: with arguments:
Sep 6 00:03:01.044031 kernel: /init
Sep 6 00:03:01.044048 kernel: with environment:
Sep 6 00:03:01.044065 kernel: HOME=/
Sep 6 00:03:01.044081 kernel: TERM=linux
Sep 6 00:03:01.044101 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:03:01.044123 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:03:01.044145 systemd[1]: Detected virtualization amazon.
Sep 6 00:03:01.044163 systemd[1]: Detected architecture arm64.
Sep 6 00:03:01.044181 systemd[1]: Running in initrd.
Sep 6 00:03:01.044199 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:03:01.044217 systemd[1]: Hostname set to .
Sep 6 00:03:01.044260 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:03:01.044282 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:03:01.044300 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:03:01.044318 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:03:01.044337 systemd[1]: Reached target paths.target.
Sep 6 00:03:01.044355 systemd[1]: Reached target slices.target.
Sep 6 00:03:01.044373 systemd[1]: Reached target swap.target.
Sep 6 00:03:01.044391 systemd[1]: Reached target timers.target.
Sep 6 00:03:01.044413 systemd[1]: Listening on iscsid.socket.
Sep 6 00:03:01.044432 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:03:01.044450 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:03:01.044468 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:03:01.044486 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:03:01.044504 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:03:01.044522 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:03:01.044540 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:03:01.044558 systemd[1]: Reached target sockets.target.
Sep 6 00:03:01.044581 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:03:01.044598 systemd[1]: Finished network-cleanup.service.
Sep 6 00:03:01.044616 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:03:01.044634 systemd[1]: Starting systemd-journald.service...
Sep 6 00:03:01.044652 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:03:01.044670 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:03:01.044689 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:03:01.044707 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:03:01.044729 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:03:01.044748 kernel: audit: type=1130 audit(1757116980.983:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.044767 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:03:01.044785 kernel: audit: type=1130 audit(1757116981.010:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.044803 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:03:01.044821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:03:01.044843 systemd-journald[310]: Journal started
Sep 6 00:03:01.044937 systemd-journald[310]: Runtime Journal (/run/log/journal/ec2452b480d44e61a7feb7bb6fe5f8d9) is 8.0M, max 75.4M, 67.4M free.
Sep 6 00:03:00.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:00.973460 systemd-modules-load[311]: Inserted module 'overlay'
Sep 6 00:03:01.060502 systemd[1]: Started systemd-journald.service.
Sep 6 00:03:01.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.065690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:03:01.086428 kernel: audit: type=1130 audit(1757116981.059:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.086464 kernel: audit: type=1130 audit(1757116981.077:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.083052 systemd-resolved[312]: Positive Trust Anchors:
Sep 6 00:03:01.083067 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:03:01.083119 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:03:01.099263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:03:01.099840 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:03:01.118086 kernel: audit: type=1130 audit(1757116981.105:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.122158 systemd-modules-load[311]: Inserted module 'br_netfilter'
Sep 6 00:03:01.126373 kernel: Bridge firewalling registered
Sep 6 00:03:01.127970 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:03:01.161265 kernel: SCSI subsystem initialized
Sep 6 00:03:01.164664 dracut-cmdline[328]: dracut-dracut-053
Sep 6 00:03:01.172822 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 00:03:01.203894 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:03:01.203958 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:03:01.208514 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:03:01.216774 systemd-modules-load[311]: Inserted module 'dm_multipath'
Sep 6 00:03:01.222553 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:03:01.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.225954 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:03:01.248284 kernel: audit: type=1130 audit(1757116981.223:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.257040 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:03:01.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.269318 kernel: audit: type=1130 audit(1757116981.257:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.352275 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:03:01.374284 kernel: iscsi: registered transport (tcp)
Sep 6 00:03:01.401541 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:03:01.401623 kernel: QLogic iSCSI HBA Driver
Sep 6 00:03:01.575263 kernel: random: crng init done
Sep 6 00:03:01.575597 systemd-resolved[312]: Defaulting to hostname 'linux'.
Sep 6 00:03:01.580370 systemd[1]: Started systemd-resolved.service.
Sep 6 00:03:01.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.582449 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:03:01.598547 kernel: audit: type=1130 audit(1757116981.581:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.598528 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:03:01.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.609616 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:03:01.613482 kernel: audit: type=1130 audit(1757116981.599:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.676295 kernel: raid6: neonx8 gen() 6399 MB/s
Sep 6 00:03:01.694275 kernel: raid6: neonx8 xor() 4642 MB/s
Sep 6 00:03:01.712265 kernel: raid6: neonx4 gen() 6591 MB/s
Sep 6 00:03:01.730270 kernel: raid6: neonx4 xor() 4833 MB/s
Sep 6 00:03:01.748266 kernel: raid6: neonx2 gen() 5831 MB/s
Sep 6 00:03:01.766269 kernel: raid6: neonx2 xor() 4430 MB/s
Sep 6 00:03:01.784265 kernel: raid6: neonx1 gen() 4519 MB/s
Sep 6 00:03:01.802269 kernel: raid6: neonx1 xor() 3611 MB/s
Sep 6 00:03:01.820265 kernel: raid6: int64x8 gen() 3446 MB/s
Sep 6 00:03:01.838270 kernel: raid6: int64x8 xor() 2065 MB/s
Sep 6 00:03:01.856265 kernel: raid6: int64x4 gen() 3864 MB/s
Sep 6 00:03:01.874272 kernel: raid6: int64x4 xor() 2174 MB/s
Sep 6 00:03:01.892263 kernel: raid6: int64x2 gen() 3629 MB/s
Sep 6 00:03:01.910271 kernel: raid6: int64x2 xor() 1929 MB/s
Sep 6 00:03:01.928264 kernel: raid6: int64x1 gen() 2767 MB/s
Sep 6 00:03:01.947747 kernel: raid6: int64x1 xor() 1407 MB/s
Sep 6 00:03:01.947776 kernel: raid6: using algorithm neonx4 gen() 6591 MB/s
Sep 6 00:03:01.947800 kernel: raid6: .... xor() 4833 MB/s, rmw enabled
Sep 6 00:03:01.949636 kernel: raid6: using neon recovery algorithm
Sep 6 00:03:01.969730 kernel: xor: measuring software checksum speed
Sep 6 00:03:01.969793 kernel: 8regs : 9154 MB/sec
Sep 6 00:03:01.971612 kernel: 32regs : 11088 MB/sec
Sep 6 00:03:01.973543 kernel: arm64_neon : 9554 MB/sec
Sep 6 00:03:01.973572 kernel: xor: using function: 32regs (11088 MB/sec)
Sep 6 00:03:02.071287 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 6 00:03:02.087738 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 00:03:02.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 6 00:03:02.090000 audit: BPF prog-id=7 op=LOAD Sep 6 00:03:02.090000 audit: BPF prog-id=8 op=LOAD Sep 6 00:03:02.092579 systemd[1]: Starting systemd-udevd.service... Sep 6 00:03:02.121813 systemd-udevd[510]: Using default interface naming scheme 'v252'. Sep 6 00:03:02.132942 systemd[1]: Started systemd-udevd.service. Sep 6 00:03:02.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:02.141068 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:03:02.169824 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation Sep 6 00:03:02.230043 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:03:02.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:02.233328 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:03:02.338914 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:03:02.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:02.465052 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 6 00:03:02.465117 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 6 00:03:02.484511 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 6 00:03:02.484742 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 6 00:03:02.484961 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 6 00:03:02.484987 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 6 00:03:02.485215 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:57:48:e1:dc:cd Sep 6 00:03:02.491291 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 6 00:03:02.499081 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:03:02.499125 kernel: GPT:9289727 != 16777215 Sep 6 00:03:02.499148 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:03:02.501350 kernel: GPT:9289727 != 16777215 Sep 6 00:03:02.502605 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:03:02.504530 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:02.511621 (udev-worker)[569]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:03:02.583283 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (572) Sep 6 00:03:02.654803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:03:02.679541 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:03:02.714957 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:03:02.735978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:03:02.741475 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:03:02.747111 systemd[1]: Starting disk-uuid.service... Sep 6 00:03:02.757974 disk-uuid[673]: Primary Header is updated. Sep 6 00:03:02.757974 disk-uuid[673]: Secondary Entries is updated. 
Sep 6 00:03:02.757974 disk-uuid[673]: Secondary Header is updated. Sep 6 00:03:02.768363 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:02.778276 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:02.786282 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:03.784266 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:03.784726 disk-uuid[674]: The operation has completed successfully. Sep 6 00:03:03.950962 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:03:03.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:03.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:03.951170 systemd[1]: Finished disk-uuid.service. Sep 6 00:03:03.976134 systemd[1]: Starting verity-setup.service... Sep 6 00:03:04.005263 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 00:03:04.091848 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:03:04.097005 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:03:04.105886 systemd[1]: Finished verity-setup.service. Sep 6 00:03:04.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.194313 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:03:04.194651 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:03:04.198567 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:03:04.202791 systemd[1]: Starting ignition-setup.service... Sep 6 00:03:04.209662 systemd[1]: Starting parse-ip-for-networkd.service... 
Sep 6 00:03:04.240626 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:04.240686 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:04.243133 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:04.279744 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:04.298392 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:03:04.324664 systemd[1]: Finished ignition-setup.service. Sep 6 00:03:04.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.331806 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:03:04.371463 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:03:04.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.375000 audit: BPF prog-id=9 op=LOAD Sep 6 00:03:04.377423 systemd[1]: Starting systemd-networkd.service... Sep 6 00:03:04.427931 systemd-networkd[1197]: lo: Link UP Sep 6 00:03:04.427953 systemd-networkd[1197]: lo: Gained carrier Sep 6 00:03:04.431828 systemd-networkd[1197]: Enumeration completed Sep 6 00:03:04.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.432402 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:03:04.432591 systemd[1]: Started systemd-networkd.service. Sep 6 00:03:04.436135 systemd[1]: Reached target network.target. 
Sep 6 00:03:04.436946 systemd-networkd[1197]: eth0: Link UP Sep 6 00:03:04.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.436954 systemd-networkd[1197]: eth0: Gained carrier Sep 6 00:03:04.440536 systemd[1]: Starting iscsiuio.service... Sep 6 00:03:04.463881 systemd[1]: Started iscsiuio.service. Sep 6 00:03:04.474432 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.29.77/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:03:04.477446 systemd[1]: Starting iscsid.service... Sep 6 00:03:04.492535 iscsid[1202]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:03:04.492535 iscsid[1202]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:03:04.492535 iscsid[1202]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:03:04.492535 iscsid[1202]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:03:04.492535 iscsid[1202]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:03:04.515088 iscsid[1202]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:03:04.523555 systemd[1]: Started iscsid.service. Sep 6 00:03:04.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.534376 systemd[1]: Starting dracut-initqueue.service... 
Sep 6 00:03:04.557097 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:03:04.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.561189 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:03:04.565051 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:03:04.569005 systemd[1]: Reached target remote-fs.target. Sep 6 00:03:04.573916 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:03:04.593879 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:03:04.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.880968 ignition[1175]: Ignition 2.14.0 Sep 6 00:03:04.880996 ignition[1175]: Stage: fetch-offline Sep 6 00:03:04.881555 ignition[1175]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:04.881624 ignition[1175]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:04.911986 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:04.912920 ignition[1175]: Ignition finished successfully Sep 6 00:03:04.918407 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:03:04.931556 kernel: kauditd_printk_skb: 17 callbacks suppressed Sep 6 00:03:04.931635 kernel: audit: type=1130 audit(1757116984.920:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:04.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.923590 systemd[1]: Starting ignition-fetch.service... Sep 6 00:03:04.942545 ignition[1221]: Ignition 2.14.0 Sep 6 00:03:04.943033 ignition[1221]: Stage: fetch Sep 6 00:03:04.943358 ignition[1221]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:04.943441 ignition[1221]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:04.957936 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:04.960583 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:04.978616 ignition[1221]: INFO : PUT result: OK Sep 6 00:03:04.983560 ignition[1221]: DEBUG : parsed url from cmdline: "" Sep 6 00:03:04.983560 ignition[1221]: INFO : no config URL provided Sep 6 00:03:04.983560 ignition[1221]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:03:04.989901 ignition[1221]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 6 00:03:04.989901 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:04.994867 ignition[1221]: INFO : PUT result: OK Sep 6 00:03:04.996523 ignition[1221]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 6 00:03:04.999970 ignition[1221]: INFO : GET result: OK Sep 6 00:03:05.001673 ignition[1221]: DEBUG : parsing config with SHA512: af5b0d537810df462ace01cd3ca243a19e757c47e3016145863443d4b37e3eea6cc15bb9b106ed901ba61badad0b2838e2feef42e10536fa2819a2cdbf7cab63 Sep 6 00:03:05.013394 unknown[1221]: fetched base config from "system" Sep 6 00:03:05.013443 unknown[1221]: fetched base config from "system" Sep 6 00:03:05.013458 unknown[1221]: 
fetched user config from "aws" Sep 6 00:03:05.020082 ignition[1221]: fetch: fetch complete Sep 6 00:03:05.020130 ignition[1221]: fetch: fetch passed Sep 6 00:03:05.021617 ignition[1221]: Ignition finished successfully Sep 6 00:03:05.025166 systemd[1]: Finished ignition-fetch.service. Sep 6 00:03:05.040309 kernel: audit: type=1130 audit(1757116985.025:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.028467 systemd[1]: Starting ignition-kargs.service... Sep 6 00:03:05.050633 ignition[1227]: Ignition 2.14.0 Sep 6 00:03:05.050648 ignition[1227]: Stage: kargs Sep 6 00:03:05.050931 ignition[1227]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.050983 ignition[1227]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.065556 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.068861 ignition[1227]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.074426 ignition[1227]: INFO : PUT result: OK Sep 6 00:03:05.079335 ignition[1227]: kargs: kargs passed Sep 6 00:03:05.079614 ignition[1227]: Ignition finished successfully Sep 6 00:03:05.084019 systemd[1]: Finished ignition-kargs.service. Sep 6 00:03:05.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.091808 systemd[1]: Starting ignition-disks.service... 
Sep 6 00:03:05.101326 kernel: audit: type=1130 audit(1757116985.089:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.108410 ignition[1233]: Ignition 2.14.0 Sep 6 00:03:05.110314 ignition[1233]: Stage: disks Sep 6 00:03:05.111972 ignition[1233]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.114578 ignition[1233]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.128772 ignition[1233]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.131356 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.134419 ignition[1233]: INFO : PUT result: OK Sep 6 00:03:05.143481 ignition[1233]: disks: disks passed Sep 6 00:03:05.143808 ignition[1233]: Ignition finished successfully Sep 6 00:03:05.148043 systemd[1]: Finished ignition-disks.service. Sep 6 00:03:05.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.151613 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:03:05.170184 kernel: audit: type=1130 audit(1757116985.150:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.161179 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:03:05.163114 systemd[1]: Reached target local-fs.target. Sep 6 00:03:05.164870 systemd[1]: Reached target sysinit.target. Sep 6 00:03:05.166559 systemd[1]: Reached target basic.target. Sep 6 00:03:05.177391 systemd[1]: Starting systemd-fsck-root.service... 
Sep 6 00:03:05.224848 systemd-fsck[1241]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 6 00:03:05.229250 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:03:05.241771 kernel: audit: type=1130 audit(1757116985.230:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.232720 systemd[1]: Mounting sysroot.mount... Sep 6 00:03:05.260276 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:03:05.262179 systemd[1]: Mounted sysroot.mount. Sep 6 00:03:05.262505 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:03:05.277402 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:03:05.286109 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:03:05.286197 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:03:05.286291 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:03:05.298120 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:03:05.323529 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:03:05.333674 systemd[1]: Starting initrd-setup-root.service... 
Sep 6 00:03:05.353269 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1258) Sep 6 00:03:05.354124 initrd-setup-root[1263]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:03:05.364642 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:05.364705 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:05.366884 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:05.377293 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:05.377375 initrd-setup-root[1289]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:03:05.388324 initrd-setup-root[1297]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:03:05.388799 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:03:05.401700 initrd-setup-root[1305]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:03:05.616087 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:03:05.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.620025 systemd[1]: Starting ignition-mount.service... Sep 6 00:03:05.632275 kernel: audit: type=1130 audit(1757116985.616:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.633085 systemd[1]: Starting sysroot-boot.service... Sep 6 00:03:05.643987 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:03:05.644180 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Sep 6 00:03:05.667537 ignition[1323]: INFO : Ignition 2.14.0 Sep 6 00:03:05.667537 ignition[1323]: INFO : Stage: mount Sep 6 00:03:05.671271 ignition[1323]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.671271 ignition[1323]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.691667 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.694502 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.698433 ignition[1323]: INFO : PUT result: OK Sep 6 00:03:05.709279 ignition[1323]: INFO : mount: mount passed Sep 6 00:03:05.709279 ignition[1323]: INFO : Ignition finished successfully Sep 6 00:03:05.713612 systemd[1]: Finished sysroot-boot.service. Sep 6 00:03:05.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.717418 systemd[1]: Finished ignition-mount.service. Sep 6 00:03:05.727120 kernel: audit: type=1130 audit(1757116985.715:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.728946 systemd[1]: Starting ignition-files.service... Sep 6 00:03:05.738333 kernel: audit: type=1130 audit(1757116985.726:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:05.744389 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:03:05.766273 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1333) Sep 6 00:03:05.772142 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:05.772186 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:05.772260 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:05.774563 systemd-networkd[1197]: eth0: Gained IPv6LL Sep 6 00:03:05.791264 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:05.796609 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:03:05.816215 ignition[1352]: INFO : Ignition 2.14.0 Sep 6 00:03:05.816215 ignition[1352]: INFO : Stage: files Sep 6 00:03:05.820033 ignition[1352]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.820033 ignition[1352]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.836295 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.839359 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.843083 ignition[1352]: INFO : PUT result: OK Sep 6 00:03:05.849337 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:03:05.854403 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:03:05.857510 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:03:05.889548 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:03:05.892863 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:03:05.897577 unknown[1352]: wrote 
ssh authorized keys file for user: core Sep 6 00:03:05.900849 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:03:05.908610 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:03:05.912461 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:03:05.912461 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:03:05.912461 ignition[1352]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 6 00:03:06.009161 ignition[1352]: INFO : GET result: OK Sep 6 00:03:06.410933 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:03:06.415341 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:03:06.415341 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:03:06.422954 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:03:06.422954 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:03:06.422954 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:03:06.422954 ignition[1352]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:03:06.449665 
ignition[1352]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1163431505" Sep 6 00:03:06.449665 ignition[1352]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1163431505": device or resource busy Sep 6 00:03:06.449665 ignition[1352]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1163431505", trying btrfs: device or resource busy Sep 6 00:03:06.449665 ignition[1352]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1163431505" Sep 6 00:03:06.463803 ignition[1352]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1163431505" Sep 6 00:03:06.479603 ignition[1352]: INFO : op(3): [started] unmounting "/mnt/oem1163431505" Sep 6 00:03:06.479603 ignition[1352]: INFO : op(3): [finished] unmounting "/mnt/oem1163431505" Sep 6 00:03:06.479603 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:03:06.489642 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:03:06.489642 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:03:06.489642 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:03:06.489642 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:03:06.489642 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:03:06.489642 ignition[1352]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 6 00:03:06.502564 systemd[1]: mnt-oem1163431505.mount: Deactivated successfully. 
Sep 6 00:03:06.721066 ignition[1352]: INFO : GET result: OK
Sep 6 00:03:06.885856 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:03:06.890389 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:03:06.890389 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:03:06.890389 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:03:06.890389 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:03:06.890389 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:03:06.890389 ignition[1352]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:06.921203 ignition[1352]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1814077726"
Sep 6 00:03:06.921203 ignition[1352]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1814077726": device or resource busy
Sep 6 00:03:06.921203 ignition[1352]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1814077726", trying btrfs: device or resource busy
Sep 6 00:03:06.921203 ignition[1352]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1814077726"
Sep 6 00:03:06.921203 ignition[1352]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1814077726"
Sep 6 00:03:06.921203 ignition[1352]: INFO : op(6): [started] unmounting "/mnt/oem1814077726"
Sep 6 00:03:06.921203 ignition[1352]: INFO : op(6): [finished] unmounting "/mnt/oem1814077726"
Sep 6 00:03:06.921203 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:03:06.921203 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:06.921203 ignition[1352]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 6 00:03:06.959779 systemd[1]: mnt-oem1814077726.mount: Deactivated successfully.
Sep 6 00:03:07.327543 ignition[1352]: INFO : GET result: OK
Sep 6 00:03:07.833203 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:07.837959 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:03:07.842257 ignition[1352]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:07.854356 ignition[1352]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1011982805"
Sep 6 00:03:07.857465 ignition[1352]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1011982805": device or resource busy
Sep 6 00:03:07.857465 ignition[1352]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1011982805", trying btrfs: device or resource busy
Sep 6 00:03:07.857465 ignition[1352]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1011982805"
Sep 6 00:03:07.868970 ignition[1352]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1011982805"
Sep 6 00:03:07.868970 ignition[1352]: INFO : op(9): [started] unmounting "/mnt/oem1011982805"
Sep 6 00:03:07.874903 ignition[1352]: INFO : op(9): [finished] unmounting "/mnt/oem1011982805"
Sep 6 00:03:07.874903 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:03:07.874903 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:03:07.874903 ignition[1352]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:07.905955 ignition[1352]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3939095573"
Sep 6 00:03:07.905955 ignition[1352]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3939095573": device or resource busy
Sep 6 00:03:07.905955 ignition[1352]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3939095573", trying btrfs: device or resource busy
Sep 6 00:03:07.918486 ignition[1352]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3939095573"
Sep 6 00:03:07.918486 ignition[1352]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3939095573"
Sep 6 00:03:07.918486 ignition[1352]: INFO : op(c): [started] unmounting "/mnt/oem3939095573"
Sep 6 00:03:07.918486 ignition[1352]: INFO : op(c): [finished] unmounting "/mnt/oem3939095573"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(14): [started] processing unit "nvidia.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(14): [finished] processing unit "nvidia.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(15): [started] processing unit "containerd.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(15): [finished] processing unit "containerd.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(17): [started] processing unit "prepare-helm.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:03:07.918486 ignition[1352]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(17): [finished] processing unit "prepare-helm.service"
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(1b): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(1b): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service"
Sep 6 00:03:07.985643 ignition[1352]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service"
Sep 6 00:03:08.019311 ignition[1352]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:03:08.019311 ignition[1352]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:03:08.019311 ignition[1352]: INFO : files: files passed
Sep 6 00:03:08.019311 ignition[1352]: INFO : Ignition finished successfully
Sep 6 00:03:08.031378 systemd[1]: Finished ignition-files.service.
Sep 6 00:03:08.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.045495 kernel: audit: type=1130 audit(1757116988.034:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.045785 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:03:08.050362 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:03:08.052601 systemd[1]: Starting ignition-quench.service...
Sep 6 00:03:08.061731 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:03:08.062090 systemd[1]: Finished ignition-quench.service.
Sep 6 00:03:08.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.075276 kernel: audit: type=1130 audit(1757116988.066:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.082111 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:03:08.086790 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:03:08.091156 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:03:08.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.096140 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:03:08.128470 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:03:08.130912 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:03:08.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.135272 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:03:08.138555 systemd[1]: Reached target initrd.target.
Sep 6 00:03:08.141730 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:03:08.146087 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:03:08.168919 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:03:08.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.174024 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:03:08.202807 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:03:08.205080 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:03:08.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.209581 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:03:08.212914 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:03:08.220292 systemd[1]: Stopped target timers.target.
Sep 6 00:03:08.223471 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:03:08.223581 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:03:08.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.229137 systemd[1]: Stopped target initrd.target.
Sep 6 00:03:08.232202 systemd[1]: Stopped target basic.target.
Sep 6 00:03:08.235216 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:03:08.237077 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:03:08.240626 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:03:08.244154 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:03:08.247471 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:03:08.250778 systemd[1]: Stopped target sysinit.target.
Sep 6 00:03:08.254131 systemd[1]: Stopped target local-fs.target.
Sep 6 00:03:08.257152 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:03:08.260350 systemd[1]: Stopped target swap.target.
Sep 6 00:03:08.263437 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:03:08.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.264861 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:03:08.266720 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:03:08.273833 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:03:08.273930 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:03:08.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.279299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:03:08.279383 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:03:08.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.285512 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:03:08.285590 systemd[1]: Stopped ignition-files.service.
Sep 6 00:03:08.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.292012 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:03:08.309794 systemd[1]: Stopping iscsiuio.service...
Sep 6 00:03:08.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.326960 ignition[1390]: INFO : Ignition 2.14.0
Sep 6 00:03:08.326960 ignition[1390]: INFO : Stage: umount
Sep 6 00:03:08.326960 ignition[1390]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:03:08.326960 ignition[1390]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 6 00:03:08.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.313363 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:03:08.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.374663 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 6 00:03:08.374663 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 6 00:03:08.374663 ignition[1390]: INFO : PUT result: OK
Sep 6 00:03:08.374663 ignition[1390]: INFO : umount: umount passed
Sep 6 00:03:08.374663 ignition[1390]: INFO : Ignition finished successfully
Sep 6 00:03:08.313482 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 00:03:08.331119 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:03:08.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.334091 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:03:08.334258 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:03:08.336381 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:03:08.336471 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:03:08.341596 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:03:08.341811 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:03:08.359121 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:03:08.359333 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:03:08.361343 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:03:08.361429 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:03:08.363383 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:03:08.363467 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:03:08.365293 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 00:03:08.365371 systemd[1]: Stopped ignition-fetch.service.
Sep 6 00:03:08.367177 systemd[1]: Stopped target network.target.
Sep 6 00:03:08.368863 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:03:08.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.368946 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:03:08.369255 systemd[1]: Stopped target paths.target.
Sep 6 00:03:08.369435 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:03:08.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.447000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:03:08.378845 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:03:08.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.379204 systemd[1]: Stopped target slices.target.
Sep 6 00:03:08.384639 systemd[1]: Stopped target sockets.target.
Sep 6 00:03:08.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.387650 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:03:08.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.387709 systemd[1]: Closed iscsid.socket.
Sep 6 00:03:08.391725 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:03:08.391809 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:03:08.398418 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:03:08.398527 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:03:08.401734 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:03:08.404915 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:03:08.417918 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Sep 6 00:03:08.491000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 00:03:08.424486 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:03:08.430661 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:03:08.432729 systemd[1]: Stopped systemd-resolved.service.
Sep 6 00:03:08.441876 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:03:08.443495 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:03:08.446071 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:03:08.446268 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:03:08.450302 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:03:08.450369 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:03:08.452107 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:03:08.452189 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:03:08.462455 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:03:08.468391 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:03:08.468506 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 00:03:08.470645 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:03:08.470742 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:03:08.474033 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:03:08.474126 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 00:03:08.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.478112 systemd[1]: Stopping systemd-udevd.service...
Sep 6 00:03:08.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.506585 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 00:03:08.523586 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:03:08.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.523878 systemd[1]: Stopped systemd-udevd.service.
Sep 6 00:03:08.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.542967 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:03:08.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.543161 systemd[1]: Stopped network-cleanup.service.
Sep 6 00:03:08.545741 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 00:03:08.545821 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 00:03:08.549095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 00:03:08.549420 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 00:03:08.552902 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 00:03:08.552985 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 00:03:08.554980 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 00:03:08.555058 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 00:03:08.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.559246 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:03:08.559418 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 00:03:08.563090 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 00:03:08.566558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:03:08.566680 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 00:03:08.592518 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 00:03:08.592718 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 00:03:08.596533 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 00:03:08.614927 systemd[1]: Starting initrd-switch-root.service...
Sep 6 00:03:08.642489 systemd[1]: Switching root.
Sep 6 00:03:08.647000 audit: BPF prog-id=8 op=UNLOAD
Sep 6 00:03:08.647000 audit: BPF prog-id=7 op=UNLOAD
Sep 6 00:03:08.650000 audit: BPF prog-id=5 op=UNLOAD
Sep 6 00:03:08.650000 audit: BPF prog-id=4 op=UNLOAD
Sep 6 00:03:08.650000 audit: BPF prog-id=3 op=UNLOAD
Sep 6 00:03:08.672668 iscsid[1202]: iscsid shutting down.
Sep 6 00:03:08.674520 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Sep 6 00:03:08.674580 systemd-journald[310]: Journal stopped
Sep 6 00:03:14.981763 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 00:03:14.981892 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 00:03:14.981935 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 00:03:14.981967 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 00:03:14.981996 kernel: SELinux: policy capability open_perms=1
Sep 6 00:03:14.982026 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 00:03:14.982064 kernel: SELinux: policy capability always_check_network=0
Sep 6 00:03:14.982095 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 00:03:14.982126 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 00:03:14.982176 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 00:03:14.982210 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 00:03:14.982302 systemd[1]: Successfully loaded SELinux policy in 130.123ms.
Sep 6 00:03:14.982369 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.484ms.
Sep 6 00:03:14.982406 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:03:14.982440 systemd[1]: Detected virtualization amazon.
Sep 6 00:03:14.982475 systemd[1]: Detected architecture arm64.
Sep 6 00:03:14.982509 systemd[1]: Detected first boot.
Sep 6 00:03:14.982542 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:03:14.982574 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 00:03:14.982612 kernel: kauditd_printk_skb: 46 callbacks suppressed
Sep 6 00:03:14.982649 kernel: audit: type=1400 audit(1757116990.253:84): avc: denied { associate } for pid=1441 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 00:03:14.982686 kernel: audit: type=1300 audit(1757116990.253:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014766c a1=40000c8ae0 a2=40000cea00 a3=32 items=0 ppid=1424 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:14.982723 kernel: audit: type=1327 audit(1757116990.253:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:03:14.982757 kernel: audit: type=1400 audit(1757116990.259:85): avc: denied { associate } for pid=1441 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 6 00:03:14.982791 kernel: audit: type=1300 audit(1757116990.259:85): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147749 a2=1ed a3=0 items=2 ppid=1424 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:14.982822 kernel: audit: type=1307 audit(1757116990.259:85): cwd="/"
Sep 6 00:03:14.982852 kernel: audit: type=1302 audit(1757116990.259:85): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:03:14.982886 kernel: audit: type=1302 audit(1757116990.259:85): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:03:14.982918 kernel: audit: type=1327 audit(1757116990.259:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:03:14.982951 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:03:14.982982 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:03:14.983017 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:03:14.983052 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:03:14.983087 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 00:03:14.983121 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Sep 6 00:03:14.983155 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:03:14.983189 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:03:14.983221 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 6 00:03:14.983281 systemd[1]: Created slice system-getty.slice.
Sep 6 00:03:14.983323 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:03:14.983357 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:03:14.983395 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:03:14.983426 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:03:14.983456 systemd[1]: Created slice user.slice.
Sep 6 00:03:14.983489 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:03:14.983520 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:03:14.983550 systemd[1]: Set up automount boot.automount.
Sep 6 00:03:14.983580 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:03:14.983609 systemd[1]: Reached target integritysetup.target.
Sep 6 00:03:14.983639 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:03:14.983672 systemd[1]: Reached target remote-fs.target.
Sep 6 00:03:14.983707 systemd[1]: Reached target slices.target.
Sep 6 00:03:14.983737 systemd[1]: Reached target swap.target.
Sep 6 00:03:14.983769 systemd[1]: Reached target torcx.target.
Sep 6 00:03:14.983799 systemd[1]: Reached target veritysetup.target.
Sep 6 00:03:14.983828 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:03:14.983858 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:03:14.983892 kernel: audit: type=1400 audit(1757116994.499:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:03:14.983926 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:03:14.983956 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:03:14.983995 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:03:14.984026 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:03:14.984056 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:03:14.984085 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:03:14.984115 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:03:14.984145 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:03:14.984174 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:03:14.984204 systemd[1]: Mounting media.mount...
Sep 6 00:03:14.984255 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:03:14.984296 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:03:14.984327 systemd[1]: Mounting tmp.mount...
Sep 6 00:03:14.984359 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:03:14.984389 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.984419 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:03:14.984450 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:03:14.984480 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:14.984512 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:03:14.984542 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:14.984575 systemd[1]: Starting modprobe@fuse.service...
Sep 6 00:03:14.984605 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:14.984639 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:03:14.984676 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 6 00:03:14.984706 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 6 00:03:14.984738 systemd[1]: Starting systemd-journald.service...
Sep 6 00:03:14.984768 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:03:14.984797 kernel: loop: module loaded
Sep 6 00:03:14.984826 systemd[1]: Starting systemd-network-generator.service...
Sep 6 00:03:14.984859 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 00:03:14.984893 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:03:14.984925 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 00:03:14.984956 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 00:03:14.984986 systemd[1]: Mounted media.mount.
Sep 6 00:03:14.985015 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 00:03:14.985049 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 00:03:14.985078 systemd[1]: Mounted tmp.mount.
Sep 6 00:03:14.985108 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:03:14.985141 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:03:14.985171 systemd[1]: Finished modprobe@configfs.service.
Sep 6 00:03:14.985201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:14.989304 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:14.989365 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:03:14.989397 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:03:14.989427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:14.989457 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:14.989490 kernel: fuse: init (API version 7.34)
Sep 6 00:03:14.989530 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:03:14.989561 systemd[1]: Finished modprobe@fuse.service.
Sep 6 00:03:14.989593 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:14.989623 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:14.989654 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:03:14.989688 systemd[1]: Finished systemd-network-generator.service.
Sep 6 00:03:14.989718 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 00:03:14.989748 systemd[1]: Reached target network-pre.target.
Sep 6 00:03:14.989778 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 00:03:14.989807 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 00:03:14.989837 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:03:14.989869 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 00:03:14.989900 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:14.989933 systemd[1]: Starting systemd-random-seed.service...
Sep 6 00:03:14.989969 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.990001 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:03:14.990033 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 00:03:14.990063 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 00:03:14.990096 systemd-journald[1543]: Journal started
Sep 6 00:03:14.991286 systemd-journald[1543]: Runtime Journal (/run/log/journal/ec2452b480d44e61a7feb7bb6fe5f8d9) is 8.0M, max 75.4M, 67.4M free.
Sep 6 00:03:14.500000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 6 00:03:14.998754 systemd[1]: Started systemd-journald.service.
Sep 6 00:03:14.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.950000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:03:14.950000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe5648db0 a2=4000 a3=1 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:14.950000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:03:14.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.998537 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 00:03:15.002796 systemd[1]: Finished systemd-random-seed.service.
Sep 6 00:03:15.005038 systemd[1]: Reached target first-boot-complete.target.
Sep 6 00:03:15.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.053492 systemd-journald[1543]: Time spent on flushing to /var/log/journal/ec2452b480d44e61a7feb7bb6fe5f8d9 is 68.645ms for 1077 entries.
Sep 6 00:03:15.053492 systemd-journald[1543]: System Journal (/var/log/journal/ec2452b480d44e61a7feb7bb6fe5f8d9) is 8.0M, max 195.6M, 187.6M free.
Sep 6 00:03:15.146293 systemd-journald[1543]: Received client request to flush runtime journal.
Sep 6 00:03:15.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.050087 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:03:15.083767 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 00:03:15.088584 systemd[1]: Starting systemd-sysusers.service...
Sep 6 00:03:15.148851 systemd[1]: Finished systemd-journal-flush.service.
Sep 6 00:03:15.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.153191 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:03:15.157552 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 00:03:15.180937 udevadm[1592]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 6 00:03:15.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.272938 systemd[1]: Finished systemd-sysusers.service.
Sep 6 00:03:15.278256 kernel: kauditd_printk_skb: 26 callbacks suppressed
Sep 6 00:03:15.278354 kernel: audit: type=1130 audit(1757116995.273:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.282724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:03:15.410571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:03:15.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.425274 kernel: audit: type=1130 audit(1757116995.413:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.886187 systemd[1]: Finished systemd-hwdb-update.service.
Sep 6 00:03:15.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.890450 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:03:15.899314 kernel: audit: type=1130 audit(1757116995.886:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:15.934387 systemd-udevd[1598]: Using default interface naming scheme 'v252'.
Sep 6 00:03:15.997208 systemd[1]: Started systemd-udevd.service.
Sep 6 00:03:15.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.002052 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:03:16.013288 kernel: audit: type=1130 audit(1757116995.997:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.024613 systemd[1]: Starting systemd-userdbd.service...
Sep 6 00:03:16.102095 systemd[1]: Found device dev-ttyS0.device.
Sep 6 00:03:16.133604 (udev-worker)[1617]: Network interface NamePolicy= disabled on kernel command line.
Sep 6 00:03:16.150440 systemd[1]: Started systemd-userdbd.service.
Sep 6 00:03:16.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.163273 kernel: audit: type=1130 audit(1757116996.151:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.381664 systemd-networkd[1599]: lo: Link UP
Sep 6 00:03:16.381687 systemd-networkd[1599]: lo: Gained carrier
Sep 6 00:03:16.382684 systemd-networkd[1599]: Enumeration completed
Sep 6 00:03:16.382899 systemd[1]: Started systemd-networkd.service.
Sep 6 00:03:16.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.387391 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:03:16.396490 kernel: audit: type=1130 audit(1757116996.383:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.389214 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:03:16.405265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 6 00:03:16.405525 systemd-networkd[1599]: eth0: Link UP
Sep 6 00:03:16.405886 systemd-networkd[1599]: eth0: Gained carrier
Sep 6 00:03:16.421576 systemd-networkd[1599]: eth0: DHCPv4 address 172.31.29.77/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 6 00:03:16.512351 systemd[1]: Finished systemd-udev-settle.service.
Sep 6 00:03:16.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.524270 kernel: audit: type=1130 audit(1757116996.514:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.525442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:03:16.530155 systemd[1]: Starting lvm2-activation-early.service...
Sep 6 00:03:16.581132 lvm[1718]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:03:16.618902 systemd[1]: Finished lvm2-activation-early.service.
Sep 6 00:03:16.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.621147 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:03:16.632335 kernel: audit: type=1130 audit(1757116996.617:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.633417 systemd[1]: Starting lvm2-activation.service...
Sep 6 00:03:16.643847 lvm[1720]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:03:16.683130 systemd[1]: Finished lvm2-activation.service.
Sep 6 00:03:16.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.685315 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:03:16.694979 kernel: audit: type=1130 audit(1757116996.683:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.695061 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 00:03:16.695105 systemd[1]: Reached target local-fs.target.
Sep 6 00:03:16.696939 systemd[1]: Reached target machines.target.
Sep 6 00:03:16.701042 systemd[1]: Starting ldconfig.service...
Sep 6 00:03:16.704157 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:16.704290 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:16.706589 systemd[1]: Starting systemd-boot-update.service...
Sep 6 00:03:16.710428 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 6 00:03:16.715762 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 00:03:16.721068 systemd[1]: Starting systemd-sysext.service...
Sep 6 00:03:16.727581 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1723 (bootctl)
Sep 6 00:03:16.729880 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 00:03:16.757885 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 00:03:16.771090 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:03:16.771680 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:03:16.804288 kernel: loop0: detected capacity change from 0 to 203944
Sep 6 00:03:16.807720 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 6 00:03:16.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.823276 kernel: audit: type=1130 audit(1757116996.811:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.924013 systemd-fsck[1736]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:03:16.924013 systemd-fsck[1736]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters
Sep 6 00:03:16.928738 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:03:16.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:16.939682 systemd[1]: Mounting boot.mount...
Sep 6 00:03:16.986264 systemd[1]: Mounted boot.mount.
Sep 6 00:03:17.027672 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:03:17.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.075371 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:03:17.076819 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:03:17.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.099268 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:03:17.127380 kernel: loop1: detected capacity change from 0 to 203944
Sep 6 00:03:17.149689 (sd-sysext)[1756]: Using extensions 'kubernetes'.
Sep 6 00:03:17.151338 (sd-sysext)[1756]: Merged extensions into '/usr'.
Sep 6 00:03:17.190519 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:03:17.198715 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.201404 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:17.208662 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:17.214555 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:17.222498 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.222803 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:17.229755 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:03:17.234826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:17.235213 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:17.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.240918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:17.244664 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:17.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.248559 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:17.248916 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:17.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.254075 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:17.254649 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.256025 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:03:17.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.262754 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:03:17.272020 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:03:17.287339 systemd[1]: Reloading.
Sep 6 00:03:17.308149 systemd-tmpfiles[1770]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:03:17.310379 systemd-tmpfiles[1770]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:03:17.325029 systemd-tmpfiles[1770]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:03:17.438073 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2025-09-06T00:03:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:03:17.440820 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2025-09-06T00:03:17Z" level=info msg="torcx already run"
Sep 6 00:03:17.651602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:03:17.651884 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:03:17.697603 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:03:17.864477 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:03:17.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.872874 systemd[1]: Starting audit-rules.service...
Sep 6 00:03:17.881754 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:03:17.887392 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:03:17.895975 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:03:17.904364 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:03:17.913072 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:03:17.922197 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:03:17.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.941805 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.948077 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:17.952320 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:17.956962 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:17.961059 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.961424 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:17.961710 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:03:17.963952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:17.964372 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:17.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.972000 audit[1862]: SYSTEM_BOOT pid=1862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.979150 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.981886 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:17.988445 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.988745 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:17.988984 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:03:17.990848 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:03:17.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.995089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:17.995747 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:17.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.998417 systemd-networkd[1599]: eth0: Gained IPv6LL
Sep 6 00:03:18.009941 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:18.012586 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:03:18.025681 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:18.027729 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:18.028029 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:18.028330 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:03:18.030092 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:03:18.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:18.037025 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:18.037448 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:18.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:18.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 6 00:03:18.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:18.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:18.048941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:03:18.049336 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:03:18.054930 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:03:18.055294 systemd[1]: Finished modprobe@drm.service. Sep 6 00:03:18.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:18.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:18.062412 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:03:18.064372 systemd[1]: Finished ensure-sysext.service. Sep 6 00:03:18.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:18.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:18.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:18.073827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:03:18.074250 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:03:18.078862 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:03:18.110867 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:03:18.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:18.167000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:03:18.167000 audit[1892]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd8d97740 a2=420 a3=0 items=0 ppid=1853 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:18.167000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:03:18.169052 augenrules[1892]: No rules Sep 6 00:03:18.170436 systemd[1]: Finished audit-rules.service. Sep 6 00:03:18.192288 ldconfig[1722]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:03:18.200547 systemd[1]: Finished ldconfig.service. Sep 6 00:03:18.204968 systemd[1]: Starting systemd-update-done.service... Sep 6 00:03:18.227556 systemd[1]: Finished systemd-update-done.service. 
Sep 6 00:03:18.261167 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:03:18.263473 systemd[1]: Reached target time-set.target. Sep 6 00:03:18.279559 systemd-resolved[1856]: Positive Trust Anchors: Sep 6 00:03:18.279588 systemd-resolved[1856]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:03:18.279641 systemd-resolved[1856]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:03:18.336510 systemd-resolved[1856]: Defaulting to hostname 'linux'. Sep 6 00:03:18.339589 systemd[1]: Started systemd-resolved.service. Sep 6 00:03:18.341578 systemd[1]: Reached target network.target. Sep 6 00:03:18.343360 systemd[1]: Reached target network-online.target. Sep 6 00:03:18.345289 systemd[1]: Reached target nss-lookup.target. Sep 6 00:03:18.347112 systemd[1]: Reached target sysinit.target. Sep 6 00:03:18.349050 systemd[1]: Started motdgen.path. Sep 6 00:03:18.350710 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:03:18.353391 systemd[1]: Started logrotate.timer. Sep 6 00:03:18.355125 systemd[1]: Started mdadm.timer. Sep 6 00:03:18.356852 systemd-timesyncd[1857]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Sep 6 00:03:18.356976 systemd-timesyncd[1857]: Initial clock synchronization to Sat 2025-09-06 00:03:17.981552 UTC. Sep 6 00:03:18.357050 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:03:18.358994 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Sep 6 00:03:18.359043 systemd[1]: Reached target paths.target. Sep 6 00:03:18.360718 systemd[1]: Reached target timers.target. Sep 6 00:03:18.368335 systemd[1]: Listening on dbus.socket. Sep 6 00:03:18.372208 systemd[1]: Starting docker.socket... Sep 6 00:03:18.376590 systemd[1]: Listening on sshd.socket. Sep 6 00:03:18.378688 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:03:18.379672 systemd[1]: Listening on docker.socket. Sep 6 00:03:18.381827 systemd[1]: Reached target sockets.target. Sep 6 00:03:18.383875 systemd[1]: Reached target basic.target. Sep 6 00:03:18.386061 systemd[1]: System is tainted: cgroupsv1 Sep 6 00:03:18.386393 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:03:18.386583 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:03:18.389183 systemd[1]: Started amazon-ssm-agent.service. Sep 6 00:03:18.395138 systemd[1]: Starting containerd.service... Sep 6 00:03:18.399139 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 00:03:18.406210 systemd[1]: Starting dbus.service... Sep 6 00:03:18.410628 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:03:18.416162 systemd[1]: Starting extend-filesystems.service... Sep 6 00:03:18.418390 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:03:18.423731 systemd[1]: Starting kubelet.service... Sep 6 00:03:18.442849 systemd[1]: Starting motdgen.service... Sep 6 00:03:18.509448 jq[1909]: false Sep 6 00:03:18.453160 systemd[1]: Started nvidia.service. Sep 6 00:03:18.458022 systemd[1]: Starting prepare-helm.service... Sep 6 00:03:18.469208 systemd[1]: Starting ssh-key-proc-cmdline.service... 
Sep 6 00:03:18.475800 systemd[1]: Starting sshd-keygen.service... Sep 6 00:03:18.490314 systemd[1]: Starting systemd-logind.service... Sep 6 00:03:18.492083 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:03:18.492249 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:03:18.496745 systemd[1]: Starting update-engine.service... Sep 6 00:03:18.502937 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:03:18.514175 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:03:18.514768 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:03:18.622653 tar[1931]: linux-arm64/helm Sep 6 00:03:18.583922 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:03:18.623278 jq[1924]: true Sep 6 00:03:18.584515 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:03:18.650535 jq[1937]: true Sep 6 00:03:18.705864 dbus-daemon[1908]: [system] SELinux support is enabled Sep 6 00:03:18.711006 systemd[1]: Started dbus.service. Sep 6 00:03:18.716367 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:03:18.716415 systemd[1]: Reached target system-config.target. Sep 6 00:03:18.718594 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:03:18.718627 systemd[1]: Reached target user-config.target. 
Sep 6 00:03:18.735075 extend-filesystems[1910]: Found loop1 Sep 6 00:03:18.737348 extend-filesystems[1910]: Found nvme0n1 Sep 6 00:03:18.737348 extend-filesystems[1910]: Found nvme0n1p1 Sep 6 00:03:18.737348 extend-filesystems[1910]: Found nvme0n1p2 Sep 6 00:03:18.737348 extend-filesystems[1910]: Found nvme0n1p3 Sep 6 00:03:18.737348 extend-filesystems[1910]: Found usr Sep 6 00:03:18.737348 extend-filesystems[1910]: Found nvme0n1p4 Sep 6 00:03:18.737348 extend-filesystems[1910]: Found nvme0n1p6 Sep 6 00:03:18.756448 dbus-daemon[1908]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1599 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 6 00:03:18.763563 extend-filesystems[1910]: Found nvme0n1p7 Sep 6 00:03:18.763563 extend-filesystems[1910]: Found nvme0n1p9 Sep 6 00:03:18.763563 extend-filesystems[1910]: Checking size of /dev/nvme0n1p9 Sep 6 00:03:18.762588 systemd[1]: Starting systemd-hostnamed.service... Sep 6 00:03:18.822487 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:03:18.823016 systemd[1]: Finished motdgen.service. Sep 6 00:03:18.836898 extend-filesystems[1910]: Resized partition /dev/nvme0n1p9 Sep 6 00:03:18.859219 extend-filesystems[1978]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:03:18.912273 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 6 00:03:19.010341 update_engine[1923]: I0906 00:03:18.968819 1923 main.cc:92] Flatcar Update Engine starting Sep 6 00:03:19.010341 update_engine[1923]: I0906 00:03:19.007086 1923 update_check_scheduler.cc:74] Next update check in 4m5s Sep 6 00:03:18.998674 systemd[1]: Started update-engine.service. Sep 6 00:03:19.004119 systemd[1]: Started locksmithd.service. 
Sep 6 00:03:19.013294 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 6 00:03:19.029977 extend-filesystems[1978]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 6 00:03:19.029977 extend-filesystems[1978]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:03:19.029977 extend-filesystems[1978]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 6 00:03:19.058443 extend-filesystems[1910]: Resized filesystem in /dev/nvme0n1p9 Sep 6 00:03:19.040626 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:03:19.066865 bash[1979]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:03:19.048715 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:03:19.049225 systemd[1]: Finished extend-filesystems.service. Sep 6 00:03:19.121263 amazon-ssm-agent[1904]: 2025/09/06 00:03:19 Failed to load instance info from vault. RegistrationKey does not exist. Sep 6 00:03:19.138754 amazon-ssm-agent[1904]: Initializing new seelog logger Sep 6 00:03:19.143568 amazon-ssm-agent[1904]: New Seelog Logger Creation Complete Sep 6 00:03:19.143897 amazon-ssm-agent[1904]: 2025/09/06 00:03:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 6 00:03:19.145503 amazon-ssm-agent[1904]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 6 00:03:19.146124 amazon-ssm-agent[1904]: 2025/09/06 00:03:19 processing appconfig overrides Sep 6 00:03:19.177922 env[1935]: time="2025-09-06T00:03:19.177217466Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:03:19.224768 systemd[1]: nvidia.service: Deactivated successfully. Sep 6 00:03:19.290195 systemd-logind[1921]: Watching system buttons on /dev/input/event0 (Power Button) Sep 6 00:03:19.290773 systemd-logind[1921]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 6 00:03:19.296793 systemd-logind[1921]: New seat seat0. 
Sep 6 00:03:19.310550 systemd[1]: Started systemd-logind.service. Sep 6 00:03:19.334460 env[1935]: time="2025-09-06T00:03:19.334394815Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:03:19.335738 env[1935]: time="2025-09-06T00:03:19.335676133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:03:19.344395 env[1935]: time="2025-09-06T00:03:19.344314216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:03:19.344395 env[1935]: time="2025-09-06T00:03:19.344386225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:03:19.346666 env[1935]: time="2025-09-06T00:03:19.346596399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:03:19.346666 env[1935]: time="2025-09-06T00:03:19.346656284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:03:19.346913 env[1935]: time="2025-09-06T00:03:19.346690871Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:03:19.346913 env[1935]: time="2025-09-06T00:03:19.346717966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:03:19.347969 env[1935]: time="2025-09-06T00:03:19.347908060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:03:19.351225 env[1935]: time="2025-09-06T00:03:19.351156006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:03:19.351552 env[1935]: time="2025-09-06T00:03:19.351499639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:03:19.351621 env[1935]: time="2025-09-06T00:03:19.351549941Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:03:19.351715 env[1935]: time="2025-09-06T00:03:19.351675144Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:03:19.351779 env[1935]: time="2025-09-06T00:03:19.351711138Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:03:19.363371 env[1935]: time="2025-09-06T00:03:19.363280984Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:03:19.363530 env[1935]: time="2025-09-06T00:03:19.363380946Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:03:19.363530 env[1935]: time="2025-09-06T00:03:19.363436817Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:03:19.363648 env[1935]: time="2025-09-06T00:03:19.363612024Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.363709 env[1935]: time="2025-09-06T00:03:19.363663595Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 6 00:03:19.363709 env[1935]: time="2025-09-06T00:03:19.363695974Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.363808 env[1935]: time="2025-09-06T00:03:19.363727507Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.364217 env[1935]: time="2025-09-06T00:03:19.364170324Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.364344 env[1935]: time="2025-09-06T00:03:19.364260449Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.364344 env[1935]: time="2025-09-06T00:03:19.364297540Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.364344 env[1935]: time="2025-09-06T00:03:19.364326511Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.364486 env[1935]: time="2025-09-06T00:03:19.364355859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:03:19.364613 env[1935]: time="2025-09-06T00:03:19.364570891Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:03:19.364793 env[1935]: time="2025-09-06T00:03:19.364751280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:03:19.365339 env[1935]: time="2025-09-06T00:03:19.365294265Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:03:19.365415 env[1935]: time="2025-09-06T00:03:19.365357925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 6 00:03:19.365415 env[1935]: time="2025-09-06T00:03:19.365389778Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:03:19.365630 env[1935]: time="2025-09-06T00:03:19.365591428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365714 env[1935]: time="2025-09-06T00:03:19.365636068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365714 env[1935]: time="2025-09-06T00:03:19.365668012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365714 env[1935]: time="2025-09-06T00:03:19.365697417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365850 env[1935]: time="2025-09-06T00:03:19.365727612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365850 env[1935]: time="2025-09-06T00:03:19.365755279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365850 env[1935]: time="2025-09-06T00:03:19.365782648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365850 env[1935]: time="2025-09-06T00:03:19.365809217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.365850 env[1935]: time="2025-09-06T00:03:19.365841150Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:03:19.366127 env[1935]: time="2025-09-06T00:03:19.366087589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 6 00:03:19.366209 env[1935]: time="2025-09-06T00:03:19.366131462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.366209 env[1935]: time="2025-09-06T00:03:19.366162812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:03:19.366342 env[1935]: time="2025-09-06T00:03:19.366201573Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:03:19.373022 env[1935]: time="2025-09-06T00:03:19.372794602Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:03:19.376348 env[1935]: time="2025-09-06T00:03:19.374962664Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:03:19.376348 env[1935]: time="2025-09-06T00:03:19.375064238Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:03:19.376348 env[1935]: time="2025-09-06T00:03:19.375159145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:03:19.379752 env[1935]: time="2025-09-06T00:03:19.379569004Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:03:19.381733 env[1935]: time="2025-09-06T00:03:19.379746980Z" level=info msg="Connect containerd service" Sep 6 00:03:19.381733 env[1935]: time="2025-09-06T00:03:19.379868226Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:03:19.384405 env[1935]: time="2025-09-06T00:03:19.384284708Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:03:19.387418 env[1935]: time="2025-09-06T00:03:19.387301312Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:03:19.387850 env[1935]: time="2025-09-06T00:03:19.387803477Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:03:19.388997 env[1935]: time="2025-09-06T00:03:19.387965841Z" level=info msg="containerd successfully booted in 0.216090s" Sep 6 00:03:19.388101 systemd[1]: Started containerd.service. 
Sep 6 00:03:19.391682 env[1935]: time="2025-09-06T00:03:19.391592567Z" level=info msg="Start subscribing containerd event" Sep 6 00:03:19.391853 env[1935]: time="2025-09-06T00:03:19.391696063Z" level=info msg="Start recovering state" Sep 6 00:03:19.395733 env[1935]: time="2025-09-06T00:03:19.395669659Z" level=info msg="Start event monitor" Sep 6 00:03:19.395863 env[1935]: time="2025-09-06T00:03:19.395762015Z" level=info msg="Start snapshots syncer" Sep 6 00:03:19.395863 env[1935]: time="2025-09-06T00:03:19.395788126Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:03:19.395863 env[1935]: time="2025-09-06T00:03:19.395837124Z" level=info msg="Start streaming server" Sep 6 00:03:19.556109 dbus-daemon[1908]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 00:03:19.556355 systemd[1]: Started systemd-hostnamed.service. Sep 6 00:03:19.566911 dbus-daemon[1908]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1962 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 00:03:19.571909 systemd[1]: Starting polkit.service... Sep 6 00:03:19.610795 polkitd[2030]: Started polkitd version 121 Sep 6 00:03:19.643091 polkitd[2030]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 00:03:19.643214 polkitd[2030]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 00:03:19.645169 polkitd[2030]: Finished loading, compiling and executing 2 rules Sep 6 00:03:19.645966 dbus-daemon[1908]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 00:03:19.646224 systemd[1]: Started polkit.service. 
Sep 6 00:03:19.652080 polkitd[2030]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 00:03:19.673819 coreos-metadata[1906]: Sep 06 00:03:19.673 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 6 00:03:19.675961 coreos-metadata[1906]: Sep 06 00:03:19.675 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 6 00:03:19.676820 coreos-metadata[1906]: Sep 06 00:03:19.676 INFO Fetch successful Sep 6 00:03:19.676942 coreos-metadata[1906]: Sep 06 00:03:19.676 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 6 00:03:19.678526 coreos-metadata[1906]: Sep 06 00:03:19.678 INFO Fetch successful Sep 6 00:03:19.684729 unknown[1906]: wrote ssh authorized keys file for user: core Sep 6 00:03:19.685547 systemd-hostnamed[1962]: Hostname set to (transient) Sep 6 00:03:19.685690 systemd-resolved[1856]: System hostname changed to 'ip-172-31-29-77'. Sep 6 00:03:19.722016 update-ssh-keys[2051]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:03:19.723152 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 6 00:03:19.869416 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Create new startup processor Sep 6 00:03:19.870344 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] registered plugins: {} Sep 6 00:03:19.870446 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing bookkeeping folders Sep 6 00:03:19.870446 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO removing the completed state files Sep 6 00:03:19.870446 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing bookkeeping folders for long running plugins Sep 6 00:03:19.870446 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing healthcheck folders for long running plugins Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing locations for inventory plugin Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing default location for custom inventory Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing default location for file inventory Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Initializing default location for role inventory Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Init the cloudwatchlogs publisher Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:runDockerAction Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:refreshAssociation Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 
2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:configurePackage Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:softwareInventory Sep 6 00:03:19.870714 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 6 00:03:19.871289 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:configureDocker Sep 6 00:03:19.871289 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:downloadContent Sep 6 00:03:19.871289 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform independent plugin aws:runDocument Sep 6 00:03:19.871289 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Successfully loaded platform dependent plugin aws:runShellScript Sep 6 00:03:19.871289 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 6 00:03:19.871289 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO OS: linux, Arch: arm64 Sep 6 00:03:19.878332 amazon-ssm-agent[1904]: datastore file /var/lib/amazon/ssm/i-01ee6a987e2580dc5/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Sep 6 00:03:19.969218 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Starting session document processing engine... Sep 6 00:03:20.063954 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 6 00:03:20.158411 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. 
Sep 6 00:03:20.252871 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-01ee6a987e2580dc5, requestId: 85582f00-e60c-4d24-a7a1-5ca8a8b981eb Sep 6 00:03:20.347584 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] Starting document processing engine... Sep 6 00:03:20.442600 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 6 00:03:20.537630 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 6 00:03:20.558382 tar[1931]: linux-arm64/LICENSE Sep 6 00:03:20.559059 tar[1931]: linux-arm64/README.md Sep 6 00:03:20.568857 systemd[1]: Finished prepare-helm.service. Sep 6 00:03:20.632901 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] Starting message polling Sep 6 00:03:20.702263 locksmithd[1993]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:03:20.728398 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 6 00:03:20.824088 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [instanceID=i-01ee6a987e2580dc5] Starting association polling Sep 6 00:03:20.919953 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 6 00:03:21.016104 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 6 00:03:21.112334 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 6 00:03:21.208810 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 6 00:03:21.305561 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] Association 
scheduling service initialized Sep 6 00:03:21.350045 sshd_keygen[1949]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:03:21.387907 systemd[1]: Finished sshd-keygen.service. Sep 6 00:03:21.392835 systemd[1]: Starting issuegen.service... Sep 6 00:03:21.403804 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] listening reply. Sep 6 00:03:21.405927 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:03:21.406485 systemd[1]: Finished issuegen.service. Sep 6 00:03:21.411457 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:03:21.428147 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:03:21.433275 systemd[1]: Started getty@tty1.service. Sep 6 00:03:21.437899 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:03:21.442785 systemd[1]: Reached target getty.target. Sep 6 00:03:21.500495 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:03:21.597785 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [OfflineService] Starting document processing engine... 
Sep 6 00:03:21.695170 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [OfflineService] [EngineProcessor] Starting Sep 6 00:03:21.792874 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [OfflineService] [EngineProcessor] Initial processing Sep 6 00:03:21.890759 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [OfflineService] Starting message polling Sep 6 00:03:21.988752 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [OfflineService] Starting send replies to MDS Sep 6 00:03:22.087013 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 6 00:03:22.185545 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 6 00:03:22.284126 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 6 00:03:22.383015 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [StartupProcessor] Executing startup processor tasks Sep 6 00:03:22.482128 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 6 00:03:22.581356 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 6 00:03:22.680742 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 6 00:03:22.780430 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01ee6a987e2580dc5?role=subscribe&stream=input Sep 6 00:03:22.880319 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01ee6a987e2580dc5?role=subscribe&stream=input 
Sep 6 00:03:22.937491 systemd[1]: Started kubelet.service. Sep 6 00:03:22.940247 systemd[1]: Reached target multi-user.target. Sep 6 00:03:22.945753 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:03:22.963669 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:03:22.964183 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:03:22.972561 systemd[1]: Startup finished in 10.215s (kernel) + 13.351s (userspace) = 23.566s. Sep 6 00:03:22.980349 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Starting receiving message from control channel Sep 6 00:03:23.080597 amazon-ssm-agent[1904]: 2025-09-06 00:03:19 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 6 00:03:24.349731 kubelet[2153]: E0906 00:03:24.349643 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:03:24.353100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:03:24.353547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:03:26.899529 systemd[1]: Created slice system-sshd.slice. Sep 6 00:03:26.901874 systemd[1]: Started sshd@0-172.31.29.77:22-147.75.109.163:47434.service. Sep 6 00:03:27.174536 sshd[2162]: Accepted publickey for core from 147.75.109.163 port 47434 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:27.180037 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:27.199594 systemd[1]: Created slice user-500.slice. Sep 6 00:03:27.201700 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:03:27.209359 systemd-logind[1921]: New session 1 of user core. 
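The "Startup finished" summary above can be checked mechanically. A minimal sketch, assuming only the text shape systemd prints ("<seconds>s (<phase>)" terms followed by a total); the regexes are my own, not a systemd API:

```python
import re

# The systemd summary line from the journal above, copied verbatim.
line = "Startup finished in 10.215s (kernel) + 13.351s (userspace) = 23.566s."

# Each phase appears as "<seconds>s (<phase>)", then the total after "=".
phases = {name: float(sec)
          for sec, name in re.findall(r"([0-9.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([0-9.]+)s", line).group(1))

# The per-phase times should account for the whole boot time.
assert abs(sum(phases.values()) - total) < 0.001
```

Here the kernel and userspace phases (10.215s + 13.351s) do sum to the reported 23.566s total.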
Sep 6 00:03:27.226041 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:03:27.230048 systemd[1]: Starting user@500.service... Sep 6 00:03:27.240424 (systemd)[2167]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:27.429131 systemd[2167]: Queued start job for default target default.target. Sep 6 00:03:27.430313 systemd[2167]: Reached target paths.target. Sep 6 00:03:27.430368 systemd[2167]: Reached target sockets.target. Sep 6 00:03:27.430400 systemd[2167]: Reached target timers.target. Sep 6 00:03:27.430430 systemd[2167]: Reached target basic.target. Sep 6 00:03:27.430628 systemd[1]: Started user@500.service. Sep 6 00:03:27.431515 systemd[2167]: Reached target default.target. Sep 6 00:03:27.431761 systemd[2167]: Startup finished in 179ms. Sep 6 00:03:27.432449 systemd[1]: Started session-1.scope. Sep 6 00:03:27.578460 systemd[1]: Started sshd@1-172.31.29.77:22-147.75.109.163:47446.service. Sep 6 00:03:27.751710 sshd[2176]: Accepted publickey for core from 147.75.109.163 port 47446 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:27.754747 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:27.763382 systemd[1]: Started session-2.scope. Sep 6 00:03:27.763781 systemd-logind[1921]: New session 2 of user core. Sep 6 00:03:27.892847 sshd[2176]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:27.897887 systemd[1]: sshd@1-172.31.29.77:22-147.75.109.163:47446.service: Deactivated successfully. Sep 6 00:03:27.900465 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:03:27.902351 systemd-logind[1921]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:03:27.905056 systemd-logind[1921]: Removed session 2. Sep 6 00:03:27.919548 systemd[1]: Started sshd@2-172.31.29.77:22-147.75.109.163:47450.service. 
Sep 6 00:03:28.086385 sshd[2183]: Accepted publickey for core from 147.75.109.163 port 47450 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:28.089407 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:28.097543 systemd[1]: Started session-3.scope. Sep 6 00:03:28.098091 systemd-logind[1921]: New session 3 of user core. Sep 6 00:03:28.218518 sshd[2183]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:28.224058 systemd[1]: sshd@2-172.31.29.77:22-147.75.109.163:47450.service: Deactivated successfully. Sep 6 00:03:28.226506 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:03:28.227932 systemd-logind[1921]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:03:28.230503 systemd-logind[1921]: Removed session 3. Sep 6 00:03:28.242203 systemd[1]: Started sshd@3-172.31.29.77:22-147.75.109.163:47460.service. Sep 6 00:03:28.406705 sshd[2190]: Accepted publickey for core from 147.75.109.163 port 47460 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:28.409741 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:28.418179 systemd[1]: Started session-4.scope. Sep 6 00:03:28.419085 systemd-logind[1921]: New session 4 of user core. Sep 6 00:03:28.547427 sshd[2190]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:28.553448 systemd[1]: sshd@3-172.31.29.77:22-147.75.109.163:47460.service: Deactivated successfully. Sep 6 00:03:28.554815 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:03:28.555798 systemd-logind[1921]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:03:28.557476 systemd-logind[1921]: Removed session 4. Sep 6 00:03:28.572058 systemd[1]: Started sshd@4-172.31.29.77:22-147.75.109.163:47474.service. 
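The repeated sshd acceptance lines above all share one field layout. A small sketch for pulling the session details out of such a line; the pattern is an assumption about OpenSSH's log text, kept deliberately loose:

```python
import re

# One of the sshd acceptance lines from the journal above.
line = ("Accepted publickey for core from 147.75.109.163 port 47450 ssh2: "
        "RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI")

# Assumed layout: "Accepted <method> for <user> from <ip> port <port> ..."
m = re.match(r"Accepted (\S+) for (\S+) from (\S+) port (\d+)", line)
method, user, ip, port = m.groups()
```

For the sessions above this yields the `core` user connecting from 147.75.109.163 on a changing source port per session.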
Sep 6 00:03:28.741138 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 47474 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:28.744124 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:28.752097 systemd-logind[1921]: New session 5 of user core. Sep 6 00:03:28.752966 systemd[1]: Started session-5.scope. Sep 6 00:03:28.878034 amazon-ssm-agent[1904]: 2025-09-06 00:03:28 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 6 00:03:28.891205 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:03:28.892305 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:03:28.973002 systemd[1]: Starting docker.service... Sep 6 00:03:29.091505 env[2211]: time="2025-09-06T00:03:29.091353715Z" level=info msg="Starting up" Sep 6 00:03:29.094798 env[2211]: time="2025-09-06T00:03:29.094746671Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:03:29.094798 env[2211]: time="2025-09-06T00:03:29.094789658Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:03:29.094994 env[2211]: time="2025-09-06T00:03:29.094837919Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:03:29.094994 env[2211]: time="2025-09-06T00:03:29.094862417Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:03:29.099513 env[2211]: time="2025-09-06T00:03:29.099466786Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:03:29.099725 env[2211]: time="2025-09-06T00:03:29.099696856Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:03:29.099857 env[2211]: time="2025-09-06T00:03:29.099824585Z" level=info msg="ccResolverWrapper: sending update to cc: 
{[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:03:29.099963 env[2211]: time="2025-09-06T00:03:29.099936041Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:03:29.111278 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4044755688-merged.mount: Deactivated successfully. Sep 6 00:03:29.325032 env[2211]: time="2025-09-06T00:03:29.324986368Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 00:03:29.325368 env[2211]: time="2025-09-06T00:03:29.325340102Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 00:03:29.325744 env[2211]: time="2025-09-06T00:03:29.325718179Z" level=info msg="Loading containers: start." Sep 6 00:03:29.599309 kernel: Initializing XFRM netlink socket Sep 6 00:03:29.652321 env[2211]: time="2025-09-06T00:03:29.652276087Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:03:29.656526 (udev-worker)[2223]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:03:29.776510 systemd-networkd[1599]: docker0: Link UP Sep 6 00:03:29.806616 env[2211]: time="2025-09-06T00:03:29.806568961Z" level=info msg="Loading containers: done." Sep 6 00:03:29.843462 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3493614452-merged.mount: Deactivated successfully. 
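The `env[...]` entries from dockerd and containerd above use logrus' key=value text format. A sketch for splitting one into its fields; it assumes the three fields always appear in this order, which holds for every `env[...]` line in this journal:

```python
import re

# A representative env[...] entry from the dockerd startup above.
entry = 'time="2025-09-06T00:03:29.091353715Z" level=info msg="Starting up"'

# Assumed field order: time="...", level=..., msg="..."
m = re.match(r'time="([^"]+)" level=(\w+) msg="([^"]*)"', entry)
ts, level, msg = m.groups()
```

Note that `msg` values can themselves contain escaped quotes (as in the PullImage lines further down), so a stricter parser would need to honor backslash escapes rather than stop at the first `"`.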
Sep 6 00:03:29.856883 env[2211]: time="2025-09-06T00:03:29.856728049Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:03:29.857567 env[2211]: time="2025-09-06T00:03:29.857530237Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:03:29.857979 env[2211]: time="2025-09-06T00:03:29.857931758Z" level=info msg="Daemon has completed initialization" Sep 6 00:03:29.894573 systemd[1]: Started docker.service. Sep 6 00:03:29.908488 env[2211]: time="2025-09-06T00:03:29.908418328Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:03:31.292540 env[1935]: time="2025-09-06T00:03:31.292444265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:03:31.955504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256973115.mount: Deactivated successfully. Sep 6 00:03:33.748620 env[1935]: time="2025-09-06T00:03:33.748560597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.751104 env[1935]: time="2025-09-06T00:03:33.751056298Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.754519 env[1935]: time="2025-09-06T00:03:33.754455831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.757910 env[1935]: time="2025-09-06T00:03:33.757848491Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.761870 env[1935]: time="2025-09-06T00:03:33.761604706Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 6 00:03:33.765645 env[1935]: time="2025-09-06T00:03:33.765597839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:03:34.604978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:03:34.605340 systemd[1]: Stopped kubelet.service. Sep 6 00:03:34.608935 systemd[1]: Starting kubelet.service... Sep 6 00:03:35.236925 systemd[1]: Started kubelet.service. Sep 6 00:03:35.355632 kubelet[2344]: E0906 00:03:35.349330 2344 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:03:35.362214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:03:35.362639 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:03:35.768328 env[1935]: time="2025-09-06T00:03:35.768225029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:35.773067 env[1935]: time="2025-09-06T00:03:35.772174656Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:35.776935 env[1935]: time="2025-09-06T00:03:35.776854438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:35.781466 env[1935]: time="2025-09-06T00:03:35.781404132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:35.783829 env[1935]: time="2025-09-06T00:03:35.783781249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 6 00:03:35.784737 env[1935]: time="2025-09-06T00:03:35.784690515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:03:37.325258 env[1935]: time="2025-09-06T00:03:37.325181438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:37.327702 env[1935]: time="2025-09-06T00:03:37.327654359Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 00:03:37.331092 env[1935]: time="2025-09-06T00:03:37.331027205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:37.334599 env[1935]: time="2025-09-06T00:03:37.334537549Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:37.336496 env[1935]: time="2025-09-06T00:03:37.336432335Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 6 00:03:37.337356 env[1935]: time="2025-09-06T00:03:37.337313178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:03:38.692809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022049497.mount: Deactivated successfully. 
Sep 6 00:03:39.586349 env[1935]: time="2025-09-06T00:03:39.586287674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.592254 env[1935]: time="2025-09-06T00:03:39.592166293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.599798 env[1935]: time="2025-09-06T00:03:39.599729961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.603618 env[1935]: time="2025-09-06T00:03:39.603545239Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 6 00:03:39.603824 env[1935]: time="2025-09-06T00:03:39.602565904Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.604487 env[1935]: time="2025-09-06T00:03:39.604425883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:03:40.148172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320688546.mount: Deactivated successfully. 
Sep 6 00:03:41.476414 env[1935]: time="2025-09-06T00:03:41.476333263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:41.484396 env[1935]: time="2025-09-06T00:03:41.484325909Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:41.490938 env[1935]: time="2025-09-06T00:03:41.490870618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:41.495913 env[1935]: time="2025-09-06T00:03:41.495855948Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:41.497512 env[1935]: time="2025-09-06T00:03:41.497456421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 6 00:03:41.498833 env[1935]: time="2025-09-06T00:03:41.498787194Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:03:42.612441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630327453.mount: Deactivated successfully. 
Sep 6 00:03:42.621841 env[1935]: time="2025-09-06T00:03:42.621786842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:42.626191 env[1935]: time="2025-09-06T00:03:42.626145129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:42.629598 env[1935]: time="2025-09-06T00:03:42.629552616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:42.632810 env[1935]: time="2025-09-06T00:03:42.632765204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:42.633862 env[1935]: time="2025-09-06T00:03:42.633821088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 6 00:03:42.634681 env[1935]: time="2025-09-06T00:03:42.634636295Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:03:43.299638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222097554.mount: Deactivated successfully. Sep 6 00:03:45.466106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:03:45.466471 systemd[1]: Stopped kubelet.service. Sep 6 00:03:45.469299 systemd[1]: Starting kubelet.service... Sep 6 00:03:45.795586 systemd[1]: Started kubelet.service. 
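Each image pull above ends with a "returns image reference" message pairing the requested tag with the resolved sha256 ID. A sketch that extracts both, using the pause-image line with containerd's `\"` quote-escaping already stripped for readability; the regex is my own:

```python
import re

# A "PullImage ... returns image reference ..." message from the journal
# above, with the escaped quotes unescaped for readability.
msg = ('PullImage "registry.k8s.io/pause:3.10" returns image reference '
       '"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8"')

# Split the reference into repository, tag, and resolved image ID.
m = re.match(r'PullImage "([^":]+):([^"]+)" returns image reference '
             r'"(sha256:[0-9a-f]+)"', msg)
image, tag, digest = m.groups()
```

Comparing these resolved IDs against the earlier ImageCreate events is one way to confirm which blob each tag actually pinned to at pull time.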
Sep 6 00:03:45.893784 kubelet[2359]: E0906 00:03:45.893725 2359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:03:45.897462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:03:45.897856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:03:46.106970 env[1935]: time="2025-09-06T00:03:46.106804710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:46.113864 env[1935]: time="2025-09-06T00:03:46.112724196Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:46.120269 env[1935]: time="2025-09-06T00:03:46.120170283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:46.129862 env[1935]: time="2025-09-06T00:03:46.129805519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:46.131047 env[1935]: time="2025-09-06T00:03:46.131001197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 6 00:03:49.718346 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 6 00:03:53.052929 systemd[1]: Stopped kubelet.service. 
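The kubelet crash loop above (restart counter climbing at 00:03:24, 00:03:35, 00:03:45) traces back to a single missing file. A sketch that pulls the offending path out of kubelet's error message; the message text is abridged from the log, the regex is my own:

```python
import re

# The repeated kubelet failure above, abridged to the err= payload.
err = ('err="failed to load kubelet config file, '
       'path: /var/lib/kubelet/config.yaml, '
       'error: open /var/lib/kubelet/config.yaml: no such file or directory"')

# Extract the path kubelet tried to load. This file is typically written
# by kubeadm init/join, so the crash loop is expected on a node that has
# not yet joined a cluster; systemd keeps rescheduling the restart job.
path = re.search(r"path: (\S+?),", err).group(1)
```

Once that file exists the unit would stop exiting with status 1, which matches the later log where kubelet comes up and begins bootstrapping against the API server.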
Sep 6 00:03:53.058341 systemd[1]: Starting kubelet.service... Sep 6 00:03:53.123007 systemd[1]: Reloading. Sep 6 00:03:53.274277 /usr/lib/systemd/system-generators/torcx-generator[2417]: time="2025-09-06T00:03:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:03:53.274343 /usr/lib/systemd/system-generators/torcx-generator[2417]: time="2025-09-06T00:03:53Z" level=info msg="torcx already run" Sep 6 00:03:53.503069 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:03:53.503110 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:03:53.543507 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:03:53.782126 systemd[1]: Started kubelet.service. Sep 6 00:03:53.789630 systemd[1]: Stopping kubelet.service... Sep 6 00:03:53.792067 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:03:53.792663 systemd[1]: Stopped kubelet.service. Sep 6 00:03:53.802688 systemd[1]: Starting kubelet.service... Sep 6 00:03:54.108084 systemd[1]: Started kubelet.service. Sep 6 00:03:54.209896 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:03:54.209896 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 6 00:03:54.209896 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:03:54.210562 kubelet[2491]: I0906 00:03:54.210020 2491 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:03:55.757745 kubelet[2491]: I0906 00:03:55.757695 2491 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:03:55.758432 kubelet[2491]: I0906 00:03:55.758405 2491 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:03:55.758965 kubelet[2491]: I0906 00:03:55.758938 2491 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:03:55.822414 kubelet[2491]: E0906 00:03:55.822358 2491 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:55.823359 kubelet[2491]: I0906 00:03:55.823319 2491 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:03:55.834783 kubelet[2491]: E0906 00:03:55.834704 2491 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:03:55.834783 kubelet[2491]: I0906 00:03:55.834770 2491 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 6 00:03:55.842361 kubelet[2491]: I0906 00:03:55.842311 2491 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:03:55.843147 kubelet[2491]: I0906 00:03:55.843115 2491 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:03:55.843490 kubelet[2491]: I0906 00:03:55.843435 2491 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:03:55.843776 kubelet[2491]: I0906 00:03:55.843492 2491 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-77","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Experime
ntalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:03:55.843935 kubelet[2491]: I0906 00:03:55.843921 2491 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:03:55.843997 kubelet[2491]: I0906 00:03:55.843949 2491 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:03:55.844333 kubelet[2491]: I0906 00:03:55.844304 2491 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:03:55.855042 kubelet[2491]: I0906 00:03:55.854988 2491 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:03:55.855042 kubelet[2491]: I0906 00:03:55.855047 2491 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:03:55.855344 kubelet[2491]: I0906 00:03:55.855089 2491 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:03:55.855344 kubelet[2491]: I0906 00:03:55.855126 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:03:55.868678 kubelet[2491]: W0906 00:03:55.868455 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-77&limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:55.868827 kubelet[2491]: E0906 00:03:55.868707 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-77&limit=500&resourceVersion=0\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:55.876076 kubelet[2491]: W0906 00:03:55.876009 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://172.31.29.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:55.876354 kubelet[2491]: E0906 00:03:55.876319 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:55.878364 kubelet[2491]: I0906 00:03:55.878311 2491 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:03:55.879723 kubelet[2491]: I0906 00:03:55.879675 2491 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:03:55.879964 kubelet[2491]: W0906 00:03:55.879928 2491 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:03:55.881865 kubelet[2491]: I0906 00:03:55.881692 2491 server.go:1274] "Started kubelet" Sep 6 00:03:55.917300 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 00:03:55.917471 kubelet[2491]: E0906 00:03:55.910371 2491 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.77:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.77:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-77.186288a270836e40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-77,UID:ip-172-31-29-77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-77,},FirstTimestamp:2025-09-06 00:03:55.881655872 +0000 UTC m=+1.758830615,LastTimestamp:2025-09-06 00:03:55.881655872 +0000 UTC m=+1.758830615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-77,}" Sep 6 00:03:55.918512 kubelet[2491]: I0906 00:03:55.918367 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:03:55.918724 kubelet[2491]: I0906 00:03:55.918669 2491 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:03:55.920630 kubelet[2491]: I0906 00:03:55.920595 2491 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:03:55.922720 kubelet[2491]: I0906 00:03:55.922641 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:03:55.923213 kubelet[2491]: I0906 00:03:55.923183 2491 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:03:55.924077 kubelet[2491]: I0906 00:03:55.923934 2491 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:03:55.927340 kubelet[2491]: I0906 00:03:55.927217 2491 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:03:55.927895 
kubelet[2491]: I0906 00:03:55.927853 2491 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:03:55.928205 kubelet[2491]: I0906 00:03:55.928184 2491 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:03:55.929213 kubelet[2491]: W0906 00:03:55.929122 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:55.929508 kubelet[2491]: E0906 00:03:55.929468 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:55.929909 kubelet[2491]: I0906 00:03:55.929880 2491 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:03:55.930163 kubelet[2491]: I0906 00:03:55.930134 2491 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:03:55.930839 kubelet[2491]: E0906 00:03:55.930744 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-77\" not found" Sep 6 00:03:55.933550 kubelet[2491]: E0906 00:03:55.933495 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-77?timeout=10s\": dial tcp 172.31.29.77:6443: connect: connection refused" interval="200ms" Sep 6 00:03:55.934830 kubelet[2491]: I0906 00:03:55.934797 2491 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:03:55.989707 
kubelet[2491]: I0906 00:03:55.989653 2491 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:03:55.989707 kubelet[2491]: I0906 00:03:55.989694 2491 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:03:55.989924 kubelet[2491]: I0906 00:03:55.989724 2491 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:03:55.994973 kubelet[2491]: I0906 00:03:55.994886 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:03:55.995222 kubelet[2491]: I0906 00:03:55.995184 2491 policy_none.go:49] "None policy: Start" Sep 6 00:03:55.997729 kubelet[2491]: I0906 00:03:55.997678 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:03:55.997729 kubelet[2491]: I0906 00:03:55.997725 2491 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:03:55.997955 kubelet[2491]: I0906 00:03:55.997761 2491 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:03:55.997955 kubelet[2491]: E0906 00:03:55.997835 2491 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:03:56.004595 kubelet[2491]: I0906 00:03:56.004494 2491 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:03:56.004595 kubelet[2491]: I0906 00:03:56.004549 2491 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:03:56.007068 kubelet[2491]: W0906 00:03:56.006987 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:56.007068 kubelet[2491]: E0906 00:03:56.007064 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://172.31.29.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:56.018060 kubelet[2491]: I0906 00:03:56.015444 2491 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:03:56.018060 kubelet[2491]: I0906 00:03:56.015698 2491 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:03:56.018060 kubelet[2491]: I0906 00:03:56.015722 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:03:56.019715 kubelet[2491]: I0906 00:03:56.019648 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:03:56.024790 kubelet[2491]: E0906 00:03:56.024725 2491 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-77\" not found" Sep 6 00:03:56.128751 kubelet[2491]: I0906 00:03:56.128702 2491 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-77" Sep 6 00:03:56.130606 kubelet[2491]: I0906 00:03:56.130404 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:03:56.130916 kubelet[2491]: I0906 00:03:56.130858 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40850bc0389292dc999b101d1dc91c70-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-77\" (UID: \"40850bc0389292dc999b101d1dc91c70\") " pod="kube-system/kube-scheduler-ip-172-31-29-77" Sep 6 00:03:56.131167 kubelet[2491]: I0906 00:03:56.131115 2491 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:03:56.131514 kubelet[2491]: I0906 00:03:56.131456 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:03:56.131723 kubelet[2491]: I0906 00:03:56.131672 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26691ee45869cf2f54b68efd1e67cd96-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-77\" (UID: \"26691ee45869cf2f54b68efd1e67cd96\") " pod="kube-system/kube-apiserver-ip-172-31-29-77" Sep 6 00:03:56.131947 kubelet[2491]: I0906 00:03:56.131901 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:03:56.132139 kubelet[2491]: E0906 00:03:56.132068 2491 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.77:6443/api/v1/nodes\": dial tcp 172.31.29.77:6443: connect: connection refused" node="ip-172-31-29-77" Sep 6 00:03:56.132139 kubelet[2491]: I0906 00:03:56.132091 2491 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:03:56.132424 kubelet[2491]: I0906 00:03:56.132158 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26691ee45869cf2f54b68efd1e67cd96-ca-certs\") pod \"kube-apiserver-ip-172-31-29-77\" (UID: \"26691ee45869cf2f54b68efd1e67cd96\") " pod="kube-system/kube-apiserver-ip-172-31-29-77" Sep 6 00:03:56.132424 kubelet[2491]: I0906 00:03:56.132204 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26691ee45869cf2f54b68efd1e67cd96-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-77\" (UID: \"26691ee45869cf2f54b68efd1e67cd96\") " pod="kube-system/kube-apiserver-ip-172-31-29-77" Sep 6 00:03:56.135070 kubelet[2491]: E0906 00:03:56.135004 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-77?timeout=10s\": dial tcp 172.31.29.77:6443: connect: connection refused" interval="400ms" Sep 6 00:03:56.337048 kubelet[2491]: I0906 00:03:56.335660 2491 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-77" Sep 6 00:03:56.337048 kubelet[2491]: E0906 00:03:56.336514 2491 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.77:6443/api/v1/nodes\": dial tcp 172.31.29.77:6443: connect: connection refused" node="ip-172-31-29-77" Sep 6 00:03:56.414871 env[1935]: time="2025-09-06T00:03:56.414772832Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-77,Uid:b80f30342ade878dd4f6857f3e2cb483,Namespace:kube-system,Attempt:0,}" Sep 6 00:03:56.416705 env[1935]: time="2025-09-06T00:03:56.416631716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-77,Uid:26691ee45869cf2f54b68efd1e67cd96,Namespace:kube-system,Attempt:0,}" Sep 6 00:03:56.422081 env[1935]: time="2025-09-06T00:03:56.421714090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-77,Uid:40850bc0389292dc999b101d1dc91c70,Namespace:kube-system,Attempt:0,}" Sep 6 00:03:56.536650 kubelet[2491]: E0906 00:03:56.536552 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-77?timeout=10s\": dial tcp 172.31.29.77:6443: connect: connection refused" interval="800ms" Sep 6 00:03:56.740045 kubelet[2491]: I0906 00:03:56.739396 2491 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-77" Sep 6 00:03:56.740045 kubelet[2491]: E0906 00:03:56.739972 2491 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.77:6443/api/v1/nodes\": dial tcp 172.31.29.77:6443: connect: connection refused" node="ip-172-31-29-77" Sep 6 00:03:56.964528 kubelet[2491]: W0906 00:03:56.964351 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:56.964528 kubelet[2491]: E0906 00:03:56.964456 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:56.990953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217849908.mount: Deactivated successfully. Sep 6 00:03:57.013916 env[1935]: time="2025-09-06T00:03:57.013797540Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.022987 env[1935]: time="2025-09-06T00:03:57.022904867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.025484 env[1935]: time="2025-09-06T00:03:57.025417550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.028725 env[1935]: time="2025-09-06T00:03:57.028655324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.033461 env[1935]: time="2025-09-06T00:03:57.033373024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.042064 env[1935]: time="2025-09-06T00:03:57.041990932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.044274 env[1935]: time="2025-09-06T00:03:57.044165892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.046903 env[1935]: time="2025-09-06T00:03:57.046815580Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.051377 env[1935]: time="2025-09-06T00:03:57.051291823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.056116 env[1935]: time="2025-09-06T00:03:57.056022102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.058279 env[1935]: time="2025-09-06T00:03:57.058188744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.066009 env[1935]: time="2025-09-06T00:03:57.065948001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:57.149304 env[1935]: time="2025-09-06T00:03:57.149136451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:03:57.149527 env[1935]: time="2025-09-06T00:03:57.149228095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:03:57.149527 env[1935]: time="2025-09-06T00:03:57.149303007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:03:57.149961 env[1935]: time="2025-09-06T00:03:57.149852047Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb76bcfa78d094fbbf90587d926e7a828a044518605ad21f843dbfe3435ab94d pid=2543 runtime=io.containerd.runc.v2 Sep 6 00:03:57.151072 kubelet[2491]: W0906 00:03:57.150964 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-77&limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:57.151323 kubelet[2491]: E0906 00:03:57.151075 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-77&limit=500&resourceVersion=0\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:57.151637 env[1935]: time="2025-09-06T00:03:57.151442738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:03:57.151872 env[1935]: time="2025-09-06T00:03:57.151806292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:03:57.152103 env[1935]: time="2025-09-06T00:03:57.152026983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:03:57.155316 env[1935]: time="2025-09-06T00:03:57.155083305Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cbb22cc835c6c99112a9096741b54ceefb7da45b628da321943af2f28edbd3c pid=2547 runtime=io.containerd.runc.v2 Sep 6 00:03:57.156313 env[1935]: time="2025-09-06T00:03:57.156126565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:03:57.156501 env[1935]: time="2025-09-06T00:03:57.156308366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:03:57.156501 env[1935]: time="2025-09-06T00:03:57.156379737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:03:57.157028 env[1935]: time="2025-09-06T00:03:57.156886813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5aecf4bd091638628084e29627cbade9d39311dcd8522100154e801619135f97 pid=2546 runtime=io.containerd.runc.v2 Sep 6 00:03:57.339156 kubelet[2491]: E0906 00:03:57.337701 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-77?timeout=10s\": dial tcp 172.31.29.77:6443: connect: connection refused" interval="1.6s" Sep 6 00:03:57.351534 env[1935]: time="2025-09-06T00:03:57.351468435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-77,Uid:b80f30342ade878dd4f6857f3e2cb483,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb76bcfa78d094fbbf90587d926e7a828a044518605ad21f843dbfe3435ab94d\"" Sep 6 00:03:57.358400 env[1935]: time="2025-09-06T00:03:57.358326777Z" level=info msg="CreateContainer 
within sandbox \"cb76bcfa78d094fbbf90587d926e7a828a044518605ad21f843dbfe3435ab94d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:03:57.372812 env[1935]: time="2025-09-06T00:03:57.371474966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-77,Uid:40850bc0389292dc999b101d1dc91c70,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aecf4bd091638628084e29627cbade9d39311dcd8522100154e801619135f97\"" Sep 6 00:03:57.372970 kubelet[2491]: W0906 00:03:57.371497 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:57.372970 kubelet[2491]: E0906 00:03:57.371631 2491 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:57.376895 env[1935]: time="2025-09-06T00:03:57.376818790Z" level=info msg="CreateContainer within sandbox \"5aecf4bd091638628084e29627cbade9d39311dcd8522100154e801619135f97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:03:57.389724 env[1935]: time="2025-09-06T00:03:57.389606162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-77,Uid:26691ee45869cf2f54b68efd1e67cd96,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cbb22cc835c6c99112a9096741b54ceefb7da45b628da321943af2f28edbd3c\"" Sep 6 00:03:57.394685 env[1935]: time="2025-09-06T00:03:57.394614340Z" level=info msg="CreateContainer within sandbox \"1cbb22cc835c6c99112a9096741b54ceefb7da45b628da321943af2f28edbd3c\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:03:57.402298 env[1935]: time="2025-09-06T00:03:57.402201147Z" level=info msg="CreateContainer within sandbox \"cb76bcfa78d094fbbf90587d926e7a828a044518605ad21f843dbfe3435ab94d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6aface63cf9cedcd2b9180d7edb92ad5f20cf7fdd02389c289f02539564ad1e1\"" Sep 6 00:03:57.406275 env[1935]: time="2025-09-06T00:03:57.406192927Z" level=info msg="StartContainer for \"6aface63cf9cedcd2b9180d7edb92ad5f20cf7fdd02389c289f02539564ad1e1\"" Sep 6 00:03:57.416793 env[1935]: time="2025-09-06T00:03:57.416705988Z" level=info msg="CreateContainer within sandbox \"5aecf4bd091638628084e29627cbade9d39311dcd8522100154e801619135f97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d4a752771f8e6620bcb58636e23a36526972ae7eac2c4537de19d08a3e25cb2\"" Sep 6 00:03:57.417913 env[1935]: time="2025-09-06T00:03:57.417861791Z" level=info msg="StartContainer for \"0d4a752771f8e6620bcb58636e23a36526972ae7eac2c4537de19d08a3e25cb2\"" Sep 6 00:03:57.444000 env[1935]: time="2025-09-06T00:03:57.443915128Z" level=info msg="CreateContainer within sandbox \"1cbb22cc835c6c99112a9096741b54ceefb7da45b628da321943af2f28edbd3c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ce070892b8da677c5c9b7d4170dcbe5e3a91430c2ae5bb785586fef3baa5edb\"" Sep 6 00:03:57.444863 env[1935]: time="2025-09-06T00:03:57.444786070Z" level=info msg="StartContainer for \"1ce070892b8da677c5c9b7d4170dcbe5e3a91430c2ae5bb785586fef3baa5edb\"" Sep 6 00:03:57.511049 kubelet[2491]: W0906 00:03:57.510954 2491 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.77:6443: connect: connection refused Sep 6 00:03:57.511049 kubelet[2491]: E0906 00:03:57.511063 2491 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.77:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:57.545222 kubelet[2491]: I0906 00:03:57.545166 2491 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-77" Sep 6 00:03:57.545838 kubelet[2491]: E0906 00:03:57.545772 2491 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.77:6443/api/v1/nodes\": dial tcp 172.31.29.77:6443: connect: connection refused" node="ip-172-31-29-77" Sep 6 00:03:57.675566 env[1935]: time="2025-09-06T00:03:57.674391098Z" level=info msg="StartContainer for \"6aface63cf9cedcd2b9180d7edb92ad5f20cf7fdd02389c289f02539564ad1e1\" returns successfully" Sep 6 00:03:57.688517 env[1935]: time="2025-09-06T00:03:57.688442770Z" level=info msg="StartContainer for \"1ce070892b8da677c5c9b7d4170dcbe5e3a91430c2ae5bb785586fef3baa5edb\" returns successfully" Sep 6 00:03:57.690903 env[1935]: time="2025-09-06T00:03:57.690762453Z" level=info msg="StartContainer for \"0d4a752771f8e6620bcb58636e23a36526972ae7eac2c4537de19d08a3e25cb2\" returns successfully" Sep 6 00:03:58.907380 amazon-ssm-agent[1904]: 2025-09-06 00:03:58 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 6 00:03:59.148722 kubelet[2491]: I0906 00:03:59.148655 2491 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-77" Sep 6 00:04:01.469948 kubelet[2491]: I0906 00:04:01.469878 2491 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-77" Sep 6 00:04:01.564530 kubelet[2491]: E0906 00:04:01.564374 2491 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-77.186288a270836e40 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-77,UID:ip-172-31-29-77,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-77,},FirstTimestamp:2025-09-06 00:03:55.881655872 +0000 UTC m=+1.758830615,LastTimestamp:2025-09-06 00:03:55.881655872 +0000 UTC m=+1.758830615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-77,}" Sep 6 00:04:01.648719 kubelet[2491]: E0906 00:04:01.648638 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Sep 6 00:04:01.859019 kubelet[2491]: I0906 00:04:01.858903 2491 apiserver.go:52] "Watching apiserver" Sep 6 00:04:01.929077 kubelet[2491]: I0906 00:04:01.929025 2491 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:04:04.145370 systemd[1]: Reloading. Sep 6 00:04:04.408990 /usr/lib/systemd/system-generators/torcx-generator[2783]: time="2025-09-06T00:04:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:04:04.417396 /usr/lib/systemd/system-generators/torcx-generator[2783]: time="2025-09-06T00:04:04Z" level=info msg="torcx already run" Sep 6 00:04:04.421372 update_engine[1923]: I0906 00:04:04.421307 1923 update_attempter.cc:509] Updating boot flags... Sep 6 00:04:04.883369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:04:04.883409 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:04:04.971352 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:04:05.553806 systemd[1]: Stopping kubelet.service... Sep 6 00:04:05.593134 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:04:05.593858 systemd[1]: Stopped kubelet.service. Sep 6 00:04:05.598484 systemd[1]: Starting kubelet.service... Sep 6 00:04:06.096495 systemd[1]: Started kubelet.service. Sep 6 00:04:06.236805 kubelet[3036]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:04:06.237546 kubelet[3036]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:04:06.237708 kubelet[3036]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:04:06.237998 kubelet[3036]: I0906 00:04:06.237935 3036 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:04:06.252866 kubelet[3036]: I0906 00:04:06.252815 3036 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:04:06.253071 kubelet[3036]: I0906 00:04:06.253045 3036 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:04:06.253756 kubelet[3036]: I0906 00:04:06.253710 3036 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:04:06.256550 kubelet[3036]: I0906 00:04:06.256510 3036 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:04:06.260824 kubelet[3036]: I0906 00:04:06.260771 3036 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:04:06.261653 sudo[3051]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:04:06.262257 sudo[3051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:04:06.277858 kubelet[3036]: E0906 00:04:06.277796 3036 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:04:06.278098 kubelet[3036]: I0906 00:04:06.278071 3036 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:04:06.291027 kubelet[3036]: I0906 00:04:06.290963 3036 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:04:06.292107 kubelet[3036]: I0906 00:04:06.292076 3036 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:04:06.292640 kubelet[3036]: I0906 00:04:06.292586 3036 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:04:06.293034 kubelet[3036]: I0906 00:04:06.292759 3036 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-77","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":1} Sep 6 00:04:06.293320 kubelet[3036]: I0906 00:04:06.293294 3036 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:04:06.293462 kubelet[3036]: I0906 00:04:06.293441 3036 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:04:06.293669 kubelet[3036]: I0906 00:04:06.293646 3036 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:04:06.293976 kubelet[3036]: I0906 00:04:06.293957 3036 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:04:06.307344 kubelet[3036]: I0906 00:04:06.307304 3036 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:04:06.307599 kubelet[3036]: I0906 00:04:06.307576 3036 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:04:06.307742 kubelet[3036]: I0906 00:04:06.307720 3036 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:04:06.317792 kubelet[3036]: I0906 00:04:06.317754 3036 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:04:06.318760 kubelet[3036]: I0906 00:04:06.318730 3036 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:04:06.319745 kubelet[3036]: I0906 00:04:06.319707 3036 server.go:1274] "Started kubelet" Sep 6 00:04:06.327879 kubelet[3036]: I0906 00:04:06.327843 3036 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:04:06.338635 kubelet[3036]: I0906 00:04:06.338599 3036 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:04:06.339393 kubelet[3036]: I0906 00:04:06.330417 3036 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:04:06.341152 kubelet[3036]: I0906 00:04:06.341115 3036 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:04:06.343329 kubelet[3036]: I0906 00:04:06.343224 3036 desired_state_of_world_populator.go:147] "Desired state populator 
starts to run" Sep 6 00:04:06.348798 kubelet[3036]: I0906 00:04:06.330480 3036 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:04:06.349423 kubelet[3036]: I0906 00:04:06.349387 3036 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:04:06.349936 kubelet[3036]: I0906 00:04:06.349910 3036 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:04:06.353291 kubelet[3036]: I0906 00:04:06.330894 3036 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:04:06.354287 kubelet[3036]: I0906 00:04:06.354211 3036 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:04:06.354667 kubelet[3036]: I0906 00:04:06.354626 3036 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:04:06.369997 kubelet[3036]: E0906 00:04:06.369929 3036 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:04:06.376913 kubelet[3036]: I0906 00:04:06.376875 3036 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:04:06.439994 kubelet[3036]: I0906 00:04:06.439938 3036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:04:06.442219 kubelet[3036]: I0906 00:04:06.442178 3036 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:04:06.442509 kubelet[3036]: I0906 00:04:06.442485 3036 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:04:06.442635 kubelet[3036]: I0906 00:04:06.442615 3036 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:04:06.442848 kubelet[3036]: E0906 00:04:06.442817 3036 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:04:06.544302 kubelet[3036]: E0906 00:04:06.542966 3036 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:04:06.630401 kubelet[3036]: I0906 00:04:06.630186 3036 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:04:06.630590 kubelet[3036]: I0906 00:04:06.630560 3036 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:04:06.630709 kubelet[3036]: I0906 00:04:06.630689 3036 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:04:06.631051 kubelet[3036]: I0906 00:04:06.631024 3036 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:04:06.631524 kubelet[3036]: I0906 00:04:06.631284 3036 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:04:06.631701 kubelet[3036]: I0906 00:04:06.631680 3036 policy_none.go:49] "None policy: Start" Sep 6 00:04:06.633915 kubelet[3036]: I0906 00:04:06.633881 3036 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:04:06.634324 kubelet[3036]: I0906 00:04:06.634303 3036 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:04:06.635011 kubelet[3036]: I0906 00:04:06.634986 3036 state_mem.go:75] "Updated machine memory state" Sep 6 00:04:06.650809 kubelet[3036]: I0906 00:04:06.650687 3036 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:04:06.652559 kubelet[3036]: I0906 00:04:06.652518 3036 
eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:04:06.654028 kubelet[3036]: I0906 00:04:06.653324 3036 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:04:06.658535 kubelet[3036]: I0906 00:04:06.658499 3036 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:04:06.755437 kubelet[3036]: E0906 00:04:06.755366 3036 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-29-77\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-77" Sep 6 00:04:06.755892 kubelet[3036]: E0906 00:04:06.755849 3036 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-77\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:04:06.756136 kubelet[3036]: E0906 00:04:06.756108 3036 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-77\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-77" Sep 6 00:04:06.771059 kubelet[3036]: I0906 00:04:06.771010 3036 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-77" Sep 6 00:04:06.784091 kubelet[3036]: I0906 00:04:06.784028 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:04:06.784307 kubelet[3036]: I0906 00:04:06.784095 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " 
pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:04:06.784307 kubelet[3036]: I0906 00:04:06.784160 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:04:06.784307 kubelet[3036]: I0906 00:04:06.784204 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40850bc0389292dc999b101d1dc91c70-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-77\" (UID: \"40850bc0389292dc999b101d1dc91c70\") " pod="kube-system/kube-scheduler-ip-172-31-29-77" Sep 6 00:04:06.784307 kubelet[3036]: I0906 00:04:06.784280 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26691ee45869cf2f54b68efd1e67cd96-ca-certs\") pod \"kube-apiserver-ip-172-31-29-77\" (UID: \"26691ee45869cf2f54b68efd1e67cd96\") " pod="kube-system/kube-apiserver-ip-172-31-29-77" Sep 6 00:04:06.784630 kubelet[3036]: I0906 00:04:06.784318 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26691ee45869cf2f54b68efd1e67cd96-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-77\" (UID: \"26691ee45869cf2f54b68efd1e67cd96\") " pod="kube-system/kube-apiserver-ip-172-31-29-77" Sep 6 00:04:06.784630 kubelet[3036]: I0906 00:04:06.784354 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: 
\"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:04:06.784630 kubelet[3036]: I0906 00:04:06.784426 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26691ee45869cf2f54b68efd1e67cd96-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-77\" (UID: \"26691ee45869cf2f54b68efd1e67cd96\") " pod="kube-system/kube-apiserver-ip-172-31-29-77" Sep 6 00:04:06.784630 kubelet[3036]: I0906 00:04:06.784466 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b80f30342ade878dd4f6857f3e2cb483-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-77\" (UID: \"b80f30342ade878dd4f6857f3e2cb483\") " pod="kube-system/kube-controller-manager-ip-172-31-29-77" Sep 6 00:04:06.787670 kubelet[3036]: I0906 00:04:06.787609 3036 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-29-77" Sep 6 00:04:06.788006 kubelet[3036]: I0906 00:04:06.787984 3036 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-77" Sep 6 00:04:07.318185 kubelet[3036]: I0906 00:04:07.318116 3036 apiserver.go:52] "Watching apiserver" Sep 6 00:04:07.348206 kubelet[3036]: I0906 00:04:07.348164 3036 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:04:07.371642 sudo[3051]: pam_unix(sudo:session): session closed for user root Sep 6 00:04:07.418152 kubelet[3036]: I0906 00:04:07.418033 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-77" podStartSLOduration=3.418009306 podStartE2EDuration="3.418009306s" podCreationTimestamp="2025-09-06 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-06 00:04:07.38788967 +0000 UTC m=+1.273399948" watchObservedRunningTime="2025-09-06 00:04:07.418009306 +0000 UTC m=+1.303519584" Sep 6 00:04:07.449826 kubelet[3036]: I0906 00:04:07.449656 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-77" podStartSLOduration=2.449626578 podStartE2EDuration="2.449626578s" podCreationTimestamp="2025-09-06 00:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:07.418542299 +0000 UTC m=+1.304052625" watchObservedRunningTime="2025-09-06 00:04:07.449626578 +0000 UTC m=+1.335136952" Sep 6 00:04:07.495760 kubelet[3036]: I0906 00:04:07.495678 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-77" podStartSLOduration=3.495656344 podStartE2EDuration="3.495656344s" podCreationTimestamp="2025-09-06 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:07.450934561 +0000 UTC m=+1.336444947" watchObservedRunningTime="2025-09-06 00:04:07.495656344 +0000 UTC m=+1.381166622" Sep 6 00:04:08.696590 kubelet[3036]: I0906 00:04:08.696547 3036 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:04:08.698295 env[1935]: time="2025-09-06T00:04:08.698212904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:04:08.699390 kubelet[3036]: I0906 00:04:08.699351 3036 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:04:09.704659 kubelet[3036]: I0906 00:04:09.704591 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bd88f66-a8a6-41a1-b0b5-26fdd65b2997-lib-modules\") pod \"kube-proxy-44lsr\" (UID: \"2bd88f66-a8a6-41a1-b0b5-26fdd65b2997\") " pod="kube-system/kube-proxy-44lsr" Sep 6 00:04:09.705380 kubelet[3036]: I0906 00:04:09.704672 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2bd88f66-a8a6-41a1-b0b5-26fdd65b2997-kube-proxy\") pod \"kube-proxy-44lsr\" (UID: \"2bd88f66-a8a6-41a1-b0b5-26fdd65b2997\") " pod="kube-system/kube-proxy-44lsr" Sep 6 00:04:09.705380 kubelet[3036]: I0906 00:04:09.704718 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bd88f66-a8a6-41a1-b0b5-26fdd65b2997-xtables-lock\") pod \"kube-proxy-44lsr\" (UID: \"2bd88f66-a8a6-41a1-b0b5-26fdd65b2997\") " pod="kube-system/kube-proxy-44lsr" Sep 6 00:04:09.705380 kubelet[3036]: I0906 00:04:09.704760 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2hz2\" (UniqueName: \"kubernetes.io/projected/2bd88f66-a8a6-41a1-b0b5-26fdd65b2997-kube-api-access-s2hz2\") pod \"kube-proxy-44lsr\" (UID: \"2bd88f66-a8a6-41a1-b0b5-26fdd65b2997\") " pod="kube-system/kube-proxy-44lsr" Sep 6 00:04:09.836947 kubelet[3036]: I0906 00:04:09.836303 3036 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:04:09.987323 env[1935]: time="2025-09-06T00:04:09.987151665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44lsr,Uid:2bd88f66-a8a6-41a1-b0b5-26fdd65b2997,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:10.037809 env[1935]: time="2025-09-06T00:04:10.037645849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:10.037996 env[1935]: time="2025-09-06T00:04:10.037893035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:10.038145 env[1935]: time="2025-09-06T00:04:10.038065043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:10.038883 env[1935]: time="2025-09-06T00:04:10.038751695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0367641231a64f18ea91846dab5c86714ed34edffef8dfd724e94e9ec96d6ece pid=3086 runtime=io.containerd.runc.v2 Sep 6 00:04:10.126681 systemd[1]: run-containerd-runc-k8s.io-0367641231a64f18ea91846dab5c86714ed34edffef8dfd724e94e9ec96d6ece-runc.XIo6p3.mount: Deactivated successfully. 
Sep 6 00:04:10.312880 env[1935]: time="2025-09-06T00:04:10.312714241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44lsr,Uid:2bd88f66-a8a6-41a1-b0b5-26fdd65b2997,Namespace:kube-system,Attempt:0,} returns sandbox id \"0367641231a64f18ea91846dab5c86714ed34edffef8dfd724e94e9ec96d6ece\"" Sep 6 00:04:10.320948 env[1935]: time="2025-09-06T00:04:10.320876919Z" level=info msg="CreateContainer within sandbox \"0367641231a64f18ea91846dab5c86714ed34edffef8dfd724e94e9ec96d6ece\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:04:10.331177 kubelet[3036]: I0906 00:04:10.331138 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-etc-cni-netd\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.331500 kubelet[3036]: I0906 00:04:10.331465 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-xtables-lock\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.331670 kubelet[3036]: I0906 00:04:10.331639 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-hubble-tls\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.331830 kubelet[3036]: I0906 00:04:10.331800 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-bpf-maps\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " 
pod="kube-system/cilium-5526p" Sep 6 00:04:10.331969 kubelet[3036]: I0906 00:04:10.331943 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cni-path\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.332126 kubelet[3036]: I0906 00:04:10.332099 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-config-path\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.332322 kubelet[3036]: I0906 00:04:10.332284 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9j6l\" (UniqueName: \"kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-kube-api-access-h9j6l\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.332513 kubelet[3036]: I0906 00:04:10.332482 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzpj\" (UniqueName: \"kubernetes.io/projected/cab44746-4338-4010-9e59-dc8c9532c501-kube-api-access-wkzpj\") pod \"cilium-operator-5d85765b45-p4mtx\" (UID: \"cab44746-4338-4010-9e59-dc8c9532c501\") " pod="kube-system/cilium-operator-5d85765b45-p4mtx" Sep 6 00:04:10.332667 kubelet[3036]: I0906 00:04:10.332639 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-hostproc\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.332809 kubelet[3036]: I0906 
00:04:10.332782 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-lib-modules\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.333147 kubelet[3036]: I0906 00:04:10.333065 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20d139c8-5e49-46e6-9c28-0e47463dc97b-clustermesh-secrets\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.333480 kubelet[3036]: I0906 00:04:10.333418 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-cgroup\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.333721 kubelet[3036]: I0906 00:04:10.333688 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-kernel\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.333958 kubelet[3036]: I0906 00:04:10.333900 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-run\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.334212 kubelet[3036]: I0906 00:04:10.334157 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-net\") pod \"cilium-5526p\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " pod="kube-system/cilium-5526p" Sep 6 00:04:10.334562 kubelet[3036]: I0906 00:04:10.334499 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cab44746-4338-4010-9e59-dc8c9532c501-cilium-config-path\") pod \"cilium-operator-5d85765b45-p4mtx\" (UID: \"cab44746-4338-4010-9e59-dc8c9532c501\") " pod="kube-system/cilium-operator-5d85765b45-p4mtx" Sep 6 00:04:10.359987 env[1935]: time="2025-09-06T00:04:10.359894460Z" level=info msg="CreateContainer within sandbox \"0367641231a64f18ea91846dab5c86714ed34edffef8dfd724e94e9ec96d6ece\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc6727ac7b733eacf81cb93a94b1a46219980a31a057214b997c2970b8f81a03\"" Sep 6 00:04:10.361775 env[1935]: time="2025-09-06T00:04:10.361714047Z" level=info msg="StartContainer for \"fc6727ac7b733eacf81cb93a94b1a46219980a31a057214b997c2970b8f81a03\"" Sep 6 00:04:10.530328 env[1935]: time="2025-09-06T00:04:10.530254520Z" level=info msg="StartContainer for \"fc6727ac7b733eacf81cb93a94b1a46219980a31a057214b997c2970b8f81a03\" returns successfully" Sep 6 00:04:10.546635 env[1935]: time="2025-09-06T00:04:10.546583199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5526p,Uid:20d139c8-5e49-46e6-9c28-0e47463dc97b,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:10.577733 env[1935]: time="2025-09-06T00:04:10.576151567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:10.578089 env[1935]: time="2025-09-06T00:04:10.577997353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:10.578430 env[1935]: time="2025-09-06T00:04:10.578346458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:10.579074 env[1935]: time="2025-09-06T00:04:10.579000825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1 pid=3164 runtime=io.containerd.runc.v2 Sep 6 00:04:10.607200 kubelet[3036]: I0906 00:04:10.607079 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-44lsr" podStartSLOduration=1.607056729 podStartE2EDuration="1.607056729s" podCreationTimestamp="2025-09-06 00:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:10.606653905 +0000 UTC m=+4.492164219" watchObservedRunningTime="2025-09-06 00:04:10.607056729 +0000 UTC m=+4.492567007" Sep 6 00:04:10.608379 env[1935]: time="2025-09-06T00:04:10.608317626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-p4mtx,Uid:cab44746-4338-4010-9e59-dc8c9532c501,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:10.699455 env[1935]: time="2025-09-06T00:04:10.699305148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:10.699455 env[1935]: time="2025-09-06T00:04:10.699393888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:10.699708 env[1935]: time="2025-09-06T00:04:10.699421360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:10.702655 env[1935]: time="2025-09-06T00:04:10.702531967Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9 pid=3191 runtime=io.containerd.runc.v2 Sep 6 00:04:10.817450 env[1935]: time="2025-09-06T00:04:10.817385571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5526p,Uid:20d139c8-5e49-46e6-9c28-0e47463dc97b,Namespace:kube-system,Attempt:0,} returns sandbox id \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\"" Sep 6 00:04:10.821282 env[1935]: time="2025-09-06T00:04:10.821198384Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:04:10.915415 env[1935]: time="2025-09-06T00:04:10.915160594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-p4mtx,Uid:cab44746-4338-4010-9e59-dc8c9532c501,Namespace:kube-system,Attempt:0,} returns sandbox id \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\"" Sep 6 00:04:17.818463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764207822.mount: Deactivated successfully. 
Sep 6 00:04:22.083188 env[1935]: time="2025-09-06T00:04:22.083051032Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:22.087601 env[1935]: time="2025-09-06T00:04:22.087536039Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:22.092722 env[1935]: time="2025-09-06T00:04:22.092647895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:22.093443 env[1935]: time="2025-09-06T00:04:22.093373155Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:04:22.099144 env[1935]: time="2025-09-06T00:04:22.098010766Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:04:22.100042 env[1935]: time="2025-09-06T00:04:22.099966132Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:04:22.129693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910865172.mount: Deactivated successfully. 
Sep 6 00:04:22.146320 env[1935]: time="2025-09-06T00:04:22.146177485Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\"" Sep 6 00:04:22.149493 env[1935]: time="2025-09-06T00:04:22.148005256Z" level=info msg="StartContainer for \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\"" Sep 6 00:04:22.286465 env[1935]: time="2025-09-06T00:04:22.286392776Z" level=info msg="StartContainer for \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\" returns successfully" Sep 6 00:04:22.522904 env[1935]: time="2025-09-06T00:04:22.522833579Z" level=info msg="shim disconnected" id=b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f Sep 6 00:04:22.523567 env[1935]: time="2025-09-06T00:04:22.523502337Z" level=warning msg="cleaning up after shim disconnected" id=b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f namespace=k8s.io Sep 6 00:04:22.523813 env[1935]: time="2025-09-06T00:04:22.523771664Z" level=info msg="cleaning up dead shim" Sep 6 00:04:22.552576 env[1935]: time="2025-09-06T00:04:22.552480130Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3421 runtime=io.containerd.runc.v2\n" Sep 6 00:04:22.618060 env[1935]: time="2025-09-06T00:04:22.617975504Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:04:22.640076 env[1935]: time="2025-09-06T00:04:22.639939179Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\"" Sep 6 00:04:22.643163 env[1935]: time="2025-09-06T00:04:22.642070156Z" level=info msg="StartContainer for \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\"" Sep 6 00:04:22.761307 env[1935]: time="2025-09-06T00:04:22.758179010Z" level=info msg="StartContainer for \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\" returns successfully" Sep 6 00:04:22.786162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:04:22.788618 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:04:22.788896 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:04:22.800707 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:04:22.815292 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:04:22.851122 env[1935]: time="2025-09-06T00:04:22.851061679Z" level=info msg="shim disconnected" id=5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37 Sep 6 00:04:22.851640 env[1935]: time="2025-09-06T00:04:22.851608531Z" level=warning msg="cleaning up after shim disconnected" id=5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37 namespace=k8s.io Sep 6 00:04:22.851840 env[1935]: time="2025-09-06T00:04:22.851769345Z" level=info msg="cleaning up dead shim" Sep 6 00:04:22.865941 env[1935]: time="2025-09-06T00:04:22.865886473Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3486 runtime=io.containerd.runc.v2\n" Sep 6 00:04:23.124269 systemd[1]: run-containerd-runc-k8s.io-b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f-runc.Ewfusz.mount: Deactivated successfully. Sep 6 00:04:23.125078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f-rootfs.mount: Deactivated successfully. 
Sep 6 00:04:23.620264 env[1935]: time="2025-09-06T00:04:23.614048277Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:04:23.669979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180553951.mount: Deactivated successfully. Sep 6 00:04:23.721770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622008917.mount: Deactivated successfully. Sep 6 00:04:23.740804 env[1935]: time="2025-09-06T00:04:23.740741790Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\"" Sep 6 00:04:23.744651 env[1935]: time="2025-09-06T00:04:23.744596418Z" level=info msg="StartContainer for \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\"" Sep 6 00:04:23.893279 env[1935]: time="2025-09-06T00:04:23.893086250Z" level=info msg="StartContainer for \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\" returns successfully" Sep 6 00:04:23.961163 env[1935]: time="2025-09-06T00:04:23.961080709Z" level=info msg="shim disconnected" id=b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0 Sep 6 00:04:23.961163 env[1935]: time="2025-09-06T00:04:23.961151551Z" level=warning msg="cleaning up after shim disconnected" id=b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0 namespace=k8s.io Sep 6 00:04:23.961568 env[1935]: time="2025-09-06T00:04:23.961174605Z" level=info msg="cleaning up dead shim" Sep 6 00:04:23.976979 env[1935]: time="2025-09-06T00:04:23.976824019Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3544 runtime=io.containerd.runc.v2\n" Sep 6 00:04:24.622687 env[1935]: 
time="2025-09-06T00:04:24.622618598Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:04:24.664402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159784506.mount: Deactivated successfully. Sep 6 00:04:24.688441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31848461.mount: Deactivated successfully. Sep 6 00:04:24.698861 env[1935]: time="2025-09-06T00:04:24.698752385Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\"" Sep 6 00:04:24.701307 env[1935]: time="2025-09-06T00:04:24.700214356Z" level=info msg="StartContainer for \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\"" Sep 6 00:04:24.828704 env[1935]: time="2025-09-06T00:04:24.828611492Z" level=info msg="StartContainer for \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\" returns successfully" Sep 6 00:04:24.917745 env[1935]: time="2025-09-06T00:04:24.917581669Z" level=info msg="shim disconnected" id=f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c Sep 6 00:04:24.917745 env[1935]: time="2025-09-06T00:04:24.917654251Z" level=warning msg="cleaning up after shim disconnected" id=f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c namespace=k8s.io Sep 6 00:04:24.917745 env[1935]: time="2025-09-06T00:04:24.917677809Z" level=info msg="cleaning up dead shim" Sep 6 00:04:24.940962 env[1935]: time="2025-09-06T00:04:24.940896930Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3601 runtime=io.containerd.runc.v2\n" Sep 6 00:04:25.250969 env[1935]: time="2025-09-06T00:04:25.250870434Z" level=info 
msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:25.255200 env[1935]: time="2025-09-06T00:04:25.255109049Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:25.259067 env[1935]: time="2025-09-06T00:04:25.258987131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:25.260839 env[1935]: time="2025-09-06T00:04:25.260751475Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 00:04:25.268622 env[1935]: time="2025-09-06T00:04:25.268534210Z" level=info msg="CreateContainer within sandbox \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:04:25.290504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134399596.mount: Deactivated successfully. 
Sep 6 00:04:25.301128 env[1935]: time="2025-09-06T00:04:25.301057717Z" level=info msg="CreateContainer within sandbox \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\"" Sep 6 00:04:25.304044 env[1935]: time="2025-09-06T00:04:25.302848570Z" level=info msg="StartContainer for \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\"" Sep 6 00:04:25.432640 env[1935]: time="2025-09-06T00:04:25.432567676Z" level=info msg="StartContainer for \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\" returns successfully" Sep 6 00:04:25.647690 env[1935]: time="2025-09-06T00:04:25.644216850Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:04:25.654571 kubelet[3036]: I0906 00:04:25.654471 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-p4mtx" podStartSLOduration=1.310479932 podStartE2EDuration="15.654445906s" podCreationTimestamp="2025-09-06 00:04:10 +0000 UTC" firstStartedPulling="2025-09-06 00:04:10.918938855 +0000 UTC m=+4.804449133" lastFinishedPulling="2025-09-06 00:04:25.262904829 +0000 UTC m=+19.148415107" observedRunningTime="2025-09-06 00:04:25.65398753 +0000 UTC m=+19.539497808" watchObservedRunningTime="2025-09-06 00:04:25.654445906 +0000 UTC m=+19.539956196" Sep 6 00:04:25.723590 env[1935]: time="2025-09-06T00:04:25.723414662Z" level=info msg="CreateContainer within sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\"" Sep 6 00:04:25.725668 env[1935]: time="2025-09-06T00:04:25.725582093Z" level=info 
msg="StartContainer for \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\"" Sep 6 00:04:25.986709 env[1935]: time="2025-09-06T00:04:25.986618579Z" level=info msg="StartContainer for \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\" returns successfully" Sep 6 00:04:26.374457 kubelet[3036]: I0906 00:04:26.373978 3036 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:04:26.452723 kubelet[3036]: W0906 00:04:26.452643 3036 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-29-77" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-77' and this object Sep 6 00:04:26.452943 kubelet[3036]: E0906 00:04:26.452736 3036 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-29-77\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-77' and this object" logger="UnhandledError" Sep 6 00:04:26.617304 kubelet[3036]: I0906 00:04:26.617216 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9beb044-26d0-40aa-8bf9-2d2cbb90055b-config-volume\") pod \"coredns-7c65d6cfc9-ll28j\" (UID: \"d9beb044-26d0-40aa-8bf9-2d2cbb90055b\") " pod="kube-system/coredns-7c65d6cfc9-ll28j" Sep 6 00:04:26.617714 kubelet[3036]: I0906 00:04:26.617673 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnv2p\" (UniqueName: \"kubernetes.io/projected/d9beb044-26d0-40aa-8bf9-2d2cbb90055b-kube-api-access-xnv2p\") pod \"coredns-7c65d6cfc9-ll28j\" (UID: 
\"d9beb044-26d0-40aa-8bf9-2d2cbb90055b\") " pod="kube-system/coredns-7c65d6cfc9-ll28j" Sep 6 00:04:26.617963 kubelet[3036]: I0906 00:04:26.617930 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv6d2\" (UniqueName: \"kubernetes.io/projected/3cf1f677-f2ec-44c1-a2e0-0637cad7beb6-kube-api-access-wv6d2\") pod \"coredns-7c65d6cfc9-kzl77\" (UID: \"3cf1f677-f2ec-44c1-a2e0-0637cad7beb6\") " pod="kube-system/coredns-7c65d6cfc9-kzl77" Sep 6 00:04:26.618176 kubelet[3036]: I0906 00:04:26.618132 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cf1f677-f2ec-44c1-a2e0-0637cad7beb6-config-volume\") pod \"coredns-7c65d6cfc9-kzl77\" (UID: \"3cf1f677-f2ec-44c1-a2e0-0637cad7beb6\") " pod="kube-system/coredns-7c65d6cfc9-kzl77" Sep 6 00:04:26.733301 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 00:04:27.720184 kubelet[3036]: E0906 00:04:27.720117 3036 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:04:27.720830 kubelet[3036]: E0906 00:04:27.720271 3036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9beb044-26d0-40aa-8bf9-2d2cbb90055b-config-volume podName:d9beb044-26d0-40aa-8bf9-2d2cbb90055b nodeName:}" failed. No retries permitted until 2025-09-06 00:04:28.220221404 +0000 UTC m=+22.105731682 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d9beb044-26d0-40aa-8bf9-2d2cbb90055b-config-volume") pod "coredns-7c65d6cfc9-ll28j" (UID: "d9beb044-26d0-40aa-8bf9-2d2cbb90055b") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:04:27.720830 kubelet[3036]: E0906 00:04:27.720578 3036 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:04:27.720830 kubelet[3036]: E0906 00:04:27.720640 3036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cf1f677-f2ec-44c1-a2e0-0637cad7beb6-config-volume podName:3cf1f677-f2ec-44c1-a2e0-0637cad7beb6 nodeName:}" failed. No retries permitted until 2025-09-06 00:04:28.220621382 +0000 UTC m=+22.106131660 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cf1f677-f2ec-44c1-a2e0-0637cad7beb6-config-volume") pod "coredns-7c65d6cfc9-kzl77" (UID: "3cf1f677-f2ec-44c1-a2e0-0637cad7beb6") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:04:28.056298 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 6 00:04:28.241193 env[1935]: time="2025-09-06T00:04:28.241091644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kzl77,Uid:3cf1f677-f2ec-44c1-a2e0-0637cad7beb6,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:28.277725 env[1935]: time="2025-09-06T00:04:28.277635156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ll28j,Uid:d9beb044-26d0-40aa-8bf9-2d2cbb90055b,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:30.783086 systemd-networkd[1599]: cilium_host: Link UP Sep 6 00:04:30.783422 systemd-networkd[1599]: cilium_net: Link UP Sep 6 00:04:30.783430 systemd-networkd[1599]: cilium_net: Gained carrier Sep 6 00:04:30.783745 systemd-networkd[1599]: cilium_host: Gained carrier Sep 6 00:04:30.786288 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:04:30.786853 (udev-worker)[3820]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:04:30.788258 (udev-worker)[3821]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:04:30.790523 systemd-networkd[1599]: cilium_host: Gained IPv6LL Sep 6 00:04:30.990776 (udev-worker)[3736]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:04:31.000694 systemd-networkd[1599]: cilium_vxlan: Link UP Sep 6 00:04:31.000715 systemd-networkd[1599]: cilium_vxlan: Gained carrier Sep 6 00:04:31.181491 systemd-networkd[1599]: cilium_net: Gained IPv6LL Sep 6 00:04:31.610298 kernel: NET: Registered PF_ALG protocol family Sep 6 00:04:32.110496 systemd-networkd[1599]: cilium_vxlan: Gained IPv6LL Sep 6 00:04:32.486545 systemd[1]: run-containerd-runc-k8s.io-47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30-runc.1GmDLO.mount: Deactivated successfully. 
Sep 6 00:04:33.129460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:04:33.122668 systemd-networkd[1599]: lxc_health: Link UP Sep 6 00:04:33.128380 systemd-networkd[1599]: lxc_health: Gained carrier Sep 6 00:04:33.393375 systemd-networkd[1599]: lxc18bc2e7fbd72: Link UP Sep 6 00:04:33.409295 kernel: eth0: renamed from tmpc8403 Sep 6 00:04:33.422452 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc18bc2e7fbd72: link becomes ready Sep 6 00:04:33.422852 systemd-networkd[1599]: lxc18bc2e7fbd72: Gained carrier Sep 6 00:04:33.444318 systemd-networkd[1599]: lxc51ebc3daffbd: Link UP Sep 6 00:04:33.465326 kernel: eth0: renamed from tmp8e93b Sep 6 00:04:33.475744 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc51ebc3daffbd: link becomes ready Sep 6 00:04:33.474746 systemd-networkd[1599]: lxc51ebc3daffbd: Gained carrier Sep 6 00:04:34.583539 kubelet[3036]: I0906 00:04:34.583419 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5526p" podStartSLOduration=13.306876755 podStartE2EDuration="24.58338156s" podCreationTimestamp="2025-09-06 00:04:10 +0000 UTC" firstStartedPulling="2025-09-06 00:04:10.819871023 +0000 UTC m=+4.705381325" lastFinishedPulling="2025-09-06 00:04:22.09637584 +0000 UTC m=+15.981886130" observedRunningTime="2025-09-06 00:04:27.003997989 +0000 UTC m=+20.889508279" watchObservedRunningTime="2025-09-06 00:04:34.58338156 +0000 UTC m=+28.468891838" Sep 6 00:04:34.607212 systemd-networkd[1599]: lxc_health: Gained IPv6LL Sep 6 00:04:34.775503 systemd[1]: run-containerd-runc-k8s.io-47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30-runc.C7Sha9.mount: Deactivated successfully. 
Sep 6 00:04:35.246000 systemd-networkd[1599]: lxc51ebc3daffbd: Gained IPv6LL Sep 6 00:04:35.437981 systemd-networkd[1599]: lxc18bc2e7fbd72: Gained IPv6LL Sep 6 00:04:37.105027 systemd[1]: run-containerd-runc-k8s.io-47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30-runc.hH8GV6.mount: Deactivated successfully. Sep 6 00:04:41.640179 systemd[1]: run-containerd-runc-k8s.io-47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30-runc.OsqiHQ.mount: Deactivated successfully. Sep 6 00:04:41.738644 env[1935]: time="2025-09-06T00:04:41.738550428Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:04:41.752272 env[1935]: time="2025-09-06T00:04:41.752172246Z" level=info msg="StopContainer for \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\" with timeout 2 (s)" Sep 6 00:04:41.752919 env[1935]: time="2025-09-06T00:04:41.752855984Z" level=info msg="Stop container \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\" with signal terminated" Sep 6 00:04:41.788654 systemd-networkd[1599]: lxc_health: Link DOWN Sep 6 00:04:41.788667 systemd-networkd[1599]: lxc_health: Lost carrier Sep 6 00:04:42.678117 env[1935]: time="2025-09-06T00:04:42.677948281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:42.678117 env[1935]: time="2025-09-06T00:04:42.678035369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:42.678570 env[1935]: time="2025-09-06T00:04:42.678092397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:42.679061 env[1935]: time="2025-09-06T00:04:42.678960584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e93ba0b77d1e39fd797b375dfd17621c24fecc7efc36e17bc36a89b21c01ea8 pid=4345 runtime=io.containerd.runc.v2 Sep 6 00:04:42.679701 env[1935]: time="2025-09-06T00:04:42.679572065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:42.679834 env[1935]: time="2025-09-06T00:04:42.679761471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:42.679899 env[1935]: time="2025-09-06T00:04:42.679856348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:42.680384 env[1935]: time="2025-09-06T00:04:42.680201187Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c84035fa536a41ded8c6fce53ddeed7aefde6baef9c928b1271a3bad875b9c3e pid=4333 runtime=io.containerd.runc.v2 Sep 6 00:04:42.766346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30-rootfs.mount: Deactivated successfully. 
Sep 6 00:04:42.833635 env[1935]: time="2025-09-06T00:04:42.833446137Z" level=info msg="shim disconnected" id=47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30 Sep 6 00:04:42.834456 env[1935]: time="2025-09-06T00:04:42.834350122Z" level=warning msg="cleaning up after shim disconnected" id=47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30 namespace=k8s.io Sep 6 00:04:42.834456 env[1935]: time="2025-09-06T00:04:42.834400909Z" level=info msg="cleaning up dead shim" Sep 6 00:04:42.854278 env[1935]: time="2025-09-06T00:04:42.851353196Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4408 runtime=io.containerd.runc.v2\n" Sep 6 00:04:42.856764 env[1935]: time="2025-09-06T00:04:42.856689886Z" level=info msg="StopContainer for \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\" returns successfully" Sep 6 00:04:42.857719 env[1935]: time="2025-09-06T00:04:42.857667327Z" level=info msg="StopPodSandbox for \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\"" Sep 6 00:04:42.858168 env[1935]: time="2025-09-06T00:04:42.858124444Z" level=info msg="Container to stop \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:04:42.865808 env[1935]: time="2025-09-06T00:04:42.865732100Z" level=info msg="Container to stop \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:04:42.866005 env[1935]: time="2025-09-06T00:04:42.865968069Z" level=info msg="Container to stop \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:04:42.866169 env[1935]: time="2025-09-06T00:04:42.866136186Z" level=info msg="Container to stop 
\"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:04:42.866364 env[1935]: time="2025-09-06T00:04:42.866327536Z" level=info msg="Container to stop \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:04:42.870890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1-shm.mount: Deactivated successfully. Sep 6 00:04:42.911558 env[1935]: time="2025-09-06T00:04:42.911502858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kzl77,Uid:3cf1f677-f2ec-44c1-a2e0-0637cad7beb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c84035fa536a41ded8c6fce53ddeed7aefde6baef9c928b1271a3bad875b9c3e\"" Sep 6 00:04:42.925975 env[1935]: time="2025-09-06T00:04:42.925918400Z" level=info msg="CreateContainer within sandbox \"c84035fa536a41ded8c6fce53ddeed7aefde6baef9c928b1271a3bad875b9c3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:04:42.936407 env[1935]: time="2025-09-06T00:04:42.935195107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ll28j,Uid:d9beb044-26d0-40aa-8bf9-2d2cbb90055b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e93ba0b77d1e39fd797b375dfd17621c24fecc7efc36e17bc36a89b21c01ea8\"" Sep 6 00:04:42.945615 env[1935]: time="2025-09-06T00:04:42.945541480Z" level=info msg="CreateContainer within sandbox \"8e93ba0b77d1e39fd797b375dfd17621c24fecc7efc36e17bc36a89b21c01ea8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:04:42.968258 env[1935]: time="2025-09-06T00:04:42.968162311Z" level=info msg="CreateContainer within sandbox \"c84035fa536a41ded8c6fce53ddeed7aefde6baef9c928b1271a3bad875b9c3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"11ff5df674d9a8bebef9d192f8d9d5ae7f43d6c713dd83a28f0ddc3830b181fb\"" Sep 6 00:04:42.971432 env[1935]: time="2025-09-06T00:04:42.969370872Z" level=info msg="StartContainer for \"11ff5df674d9a8bebef9d192f8d9d5ae7f43d6c713dd83a28f0ddc3830b181fb\"" Sep 6 00:04:42.979401 env[1935]: time="2025-09-06T00:04:42.979223275Z" level=info msg="shim disconnected" id=33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1 Sep 6 00:04:42.980459 env[1935]: time="2025-09-06T00:04:42.980389222Z" level=warning msg="cleaning up after shim disconnected" id=33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1 namespace=k8s.io Sep 6 00:04:42.980459 env[1935]: time="2025-09-06T00:04:42.980450401Z" level=info msg="cleaning up dead shim" Sep 6 00:04:43.004312 env[1935]: time="2025-09-06T00:04:43.004213175Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4464 runtime=io.containerd.runc.v2\n" Sep 6 00:04:43.004945 env[1935]: time="2025-09-06T00:04:43.004890204Z" level=info msg="TearDown network for sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" successfully" Sep 6 00:04:43.005071 env[1935]: time="2025-09-06T00:04:43.004941698Z" level=info msg="StopPodSandbox for \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" returns successfully" Sep 6 00:04:43.056139 env[1935]: time="2025-09-06T00:04:43.056052070Z" level=info msg="CreateContainer within sandbox \"8e93ba0b77d1e39fd797b375dfd17621c24fecc7efc36e17bc36a89b21c01ea8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b4b13ca810c4e09c6779c4a6909a0cf21edff5019fe99136cb56ee8f88e9c09\"" Sep 6 00:04:43.060305 env[1935]: time="2025-09-06T00:04:43.060197752Z" level=info msg="StartContainer for \"0b4b13ca810c4e09c6779c4a6909a0cf21edff5019fe99136cb56ee8f88e9c09\"" Sep 6 00:04:43.112108 kubelet[3036]: E0906 00:04:43.111584 3036 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="20d139c8-5e49-46e6-9c28-0e47463dc97b" containerName="mount-cgroup" Sep 6 00:04:43.112108 kubelet[3036]: E0906 00:04:43.111635 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20d139c8-5e49-46e6-9c28-0e47463dc97b" containerName="cilium-agent" Sep 6 00:04:43.112108 kubelet[3036]: E0906 00:04:43.111678 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20d139c8-5e49-46e6-9c28-0e47463dc97b" containerName="apply-sysctl-overwrites" Sep 6 00:04:43.112108 kubelet[3036]: E0906 00:04:43.111695 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20d139c8-5e49-46e6-9c28-0e47463dc97b" containerName="mount-bpf-fs" Sep 6 00:04:43.112108 kubelet[3036]: E0906 00:04:43.111712 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20d139c8-5e49-46e6-9c28-0e47463dc97b" containerName="clean-cilium-state" Sep 6 00:04:43.112108 kubelet[3036]: I0906 00:04:43.111783 3036 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d139c8-5e49-46e6-9c28-0e47463dc97b" containerName="cilium-agent" Sep 6 00:04:43.162567 kubelet[3036]: I0906 00:04:43.162522 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-bpf-maps\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.170006 kubelet[3036]: I0906 00:04:43.169664 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-hostproc\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.170559 kubelet[3036]: I0906 00:04:43.170412 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-net\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.170934 kubelet[3036]: I0906 00:04:43.170793 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-cgroup\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.171639 kubelet[3036]: I0906 00:04:43.171411 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-lib-modules\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.173021 kubelet[3036]: I0906 00:04:43.172981 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9j6l\" (UniqueName: \"kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-kube-api-access-h9j6l\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.180805 kubelet[3036]: I0906 00:04:43.180753 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-run\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.181400 kubelet[3036]: I0906 00:04:43.181024 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cni-path\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.194851 kubelet[3036]: I0906 00:04:43.193615 3036 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-kernel\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.195880 kubelet[3036]: I0906 00:04:43.195369 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-xtables-lock\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.205630 env[1935]: time="2025-09-06T00:04:43.205565170Z" level=info msg="StartContainer for \"11ff5df674d9a8bebef9d192f8d9d5ae7f43d6c713dd83a28f0ddc3830b181fb\" returns successfully" Sep 6 00:04:43.226188 kubelet[3036]: I0906 00:04:43.226125 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-hubble-tls\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.239604 kubelet[3036]: I0906 00:04:43.239560 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20d139c8-5e49-46e6-9c28-0e47463dc97b-clustermesh-secrets\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.239860 kubelet[3036]: I0906 00:04:43.239831 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-etc-cni-netd\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.240019 kubelet[3036]: I0906 00:04:43.239991 3036 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-config-path\") pod \"20d139c8-5e49-46e6-9c28-0e47463dc97b\" (UID: \"20d139c8-5e49-46e6-9c28-0e47463dc97b\") " Sep 6 00:04:43.240224 kubelet[3036]: I0906 00:04:43.240194 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hubble-tls\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.240455 kubelet[3036]: I0906 00:04:43.240413 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-etc-cni-netd\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.240669 kubelet[3036]: I0906 00:04:43.240629 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-net\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.240857 kubelet[3036]: I0906 00:04:43.240817 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-clustermesh-secrets\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.241019 kubelet[3036]: I0906 00:04:43.240993 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-kernel\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.241216 kubelet[3036]: I0906 00:04:43.241157 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-config-path\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.241422 kubelet[3036]: I0906 00:04:43.241391 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-kube-api-access-7m4h6\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.241591 kubelet[3036]: I0906 00:04:43.241551 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-run\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.241742 kubelet[3036]: I0906 00:04:43.241717 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hostproc\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.241899 kubelet[3036]: I0906 00:04:43.241870 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-cgroup\") pod \"cilium-mtk55\" (UID: 
\"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.242046 kubelet[3036]: I0906 00:04:43.242019 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-lib-modules\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.242262 kubelet[3036]: I0906 00:04:43.242210 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-bpf-maps\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.242405 kubelet[3036]: I0906 00:04:43.242379 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cni-path\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.242572 kubelet[3036]: I0906 00:04:43.242546 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-xtables-lock\") pod \"cilium-mtk55\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") " pod="kube-system/cilium-mtk55" Sep 6 00:04:43.242856 kubelet[3036]: I0906 00:04:43.171220 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.243022 kubelet[3036]: I0906 00:04:43.172185 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.243142 kubelet[3036]: I0906 00:04:43.172277 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.243281 kubelet[3036]: I0906 00:04:43.182037 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cni-path" (OuterVolumeSpecName: "cni-path") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.243403 kubelet[3036]: I0906 00:04:43.183025 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.243513 kubelet[3036]: I0906 00:04:43.185662 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.243624 kubelet[3036]: I0906 00:04:43.196226 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.243744 kubelet[3036]: I0906 00:04:43.205955 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-hostproc" (OuterVolumeSpecName: "hostproc") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.244793 kubelet[3036]: I0906 00:04:43.244745 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.254498 kubelet[3036]: I0906 00:04:43.254401 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:04:43.269361 kubelet[3036]: I0906 00:04:43.269309 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:04:43.269771 kubelet[3036]: I0906 00:04:43.269737 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:04:43.276384 kubelet[3036]: I0906 00:04:43.276309 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-kube-api-access-h9j6l" (OuterVolumeSpecName: "kube-api-access-h9j6l") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "kube-api-access-h9j6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:04:43.287519 kubelet[3036]: I0906 00:04:43.287440 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d139c8-5e49-46e6-9c28-0e47463dc97b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20d139c8-5e49-46e6-9c28-0e47463dc97b" (UID: "20d139c8-5e49-46e6-9c28-0e47463dc97b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:04:43.309516 env[1935]: time="2025-09-06T00:04:43.309439638Z" level=info msg="StartContainer for \"0b4b13ca810c4e09c6779c4a6909a0cf21edff5019fe99136cb56ee8f88e9c09\" returns successfully" Sep 6 00:04:43.344139 kubelet[3036]: I0906 00:04:43.344098 3036 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cni-path\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.345439 kubelet[3036]: I0906 00:04:43.345399 3036 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-kernel\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.345693 kubelet[3036]: I0906 00:04:43.345668 3036 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-xtables-lock\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.345828 kubelet[3036]: I0906 00:04:43.345806 3036 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-hubble-tls\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.345954 kubelet[3036]: I0906 00:04:43.345931 3036 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/20d139c8-5e49-46e6-9c28-0e47463dc97b-clustermesh-secrets\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346079 kubelet[3036]: I0906 00:04:43.346057 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-config-path\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346195 kubelet[3036]: I0906 00:04:43.346173 3036 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-etc-cni-netd\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346376 kubelet[3036]: I0906 00:04:43.346353 3036 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-bpf-maps\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346500 kubelet[3036]: I0906 00:04:43.346479 3036 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-hostproc\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346619 kubelet[3036]: I0906 00:04:43.346597 3036 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-host-proc-sys-net\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346742 kubelet[3036]: I0906 00:04:43.346720 3036 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-lib-modules\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346871 kubelet[3036]: I0906 00:04:43.346849 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-cgroup\") on 
node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.346999 kubelet[3036]: I0906 00:04:43.346977 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20d139c8-5e49-46e6-9c28-0e47463dc97b-cilium-run\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.347122 kubelet[3036]: I0906 00:04:43.347101 3036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9j6l\" (UniqueName: \"kubernetes.io/projected/20d139c8-5e49-46e6-9c28-0e47463dc97b-kube-api-access-h9j6l\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:04:43.425822 env[1935]: time="2025-09-06T00:04:43.425289655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtk55,Uid:5cd06277-8ae6-43bd-b8e2-d1b7109fb58f,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:43.463006 env[1935]: time="2025-09-06T00:04:43.460455855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:43.463286 env[1935]: time="2025-09-06T00:04:43.462304481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:43.463286 env[1935]: time="2025-09-06T00:04:43.462337327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:43.463660 env[1935]: time="2025-09-06T00:04:43.463495929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6 pid=4555 runtime=io.containerd.runc.v2 Sep 6 00:04:43.550895 env[1935]: time="2025-09-06T00:04:43.550842802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtk55,Uid:5cd06277-8ae6-43bd-b8e2-d1b7109fb58f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\"" Sep 6 00:04:43.555626 env[1935]: time="2025-09-06T00:04:43.555561502Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:04:43.580778 env[1935]: time="2025-09-06T00:04:43.580715171Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\"" Sep 6 00:04:43.582121 env[1935]: time="2025-09-06T00:04:43.582069083Z" level=info msg="StartContainer for \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\"" Sep 6 00:04:43.718752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1-rootfs.mount: Deactivated successfully. Sep 6 00:04:43.719040 systemd[1]: var-lib-kubelet-pods-20d139c8\x2d5e49\x2d46e6\x2d9c28\x2d0e47463dc97b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9j6l.mount: Deactivated successfully. Sep 6 00:04:43.719284 systemd[1]: var-lib-kubelet-pods-20d139c8\x2d5e49\x2d46e6\x2d9c28\x2d0e47463dc97b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:04:43.721197 env[1935]: time="2025-09-06T00:04:43.720935394Z" level=info msg="StartContainer for \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\" returns successfully" Sep 6 00:04:43.719630 systemd[1]: var-lib-kubelet-pods-20d139c8\x2d5e49\x2d46e6\x2d9c28\x2d0e47463dc97b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:04:43.742952 kubelet[3036]: I0906 00:04:43.742786 3036 scope.go:117] "RemoveContainer" containerID="47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30" Sep 6 00:04:43.754423 env[1935]: time="2025-09-06T00:04:43.754371005Z" level=info msg="RemoveContainer for \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\"" Sep 6 00:04:43.771451 env[1935]: time="2025-09-06T00:04:43.771398103Z" level=info msg="RemoveContainer for \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\" returns successfully" Sep 6 00:04:43.772807 kubelet[3036]: I0906 00:04:43.772775 3036 scope.go:117] "RemoveContainer" containerID="f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c" Sep 6 00:04:43.802854 env[1935]: time="2025-09-06T00:04:43.802768704Z" level=info msg="RemoveContainer for \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\"" Sep 6 00:04:43.829807 env[1935]: time="2025-09-06T00:04:43.829735825Z" level=info msg="RemoveContainer for \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\" returns successfully" Sep 6 00:04:43.832298 kubelet[3036]: I0906 00:04:43.830434 3036 scope.go:117] "RemoveContainer" containerID="b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0" Sep 6 00:04:43.839830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c-rootfs.mount: Deactivated successfully. 
Sep 6 00:04:43.847705 env[1935]: time="2025-09-06T00:04:43.847652927Z" level=info msg="RemoveContainer for \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\"" Sep 6 00:04:43.855354 kubelet[3036]: I0906 00:04:43.854763 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kzl77" podStartSLOduration=34.854741178 podStartE2EDuration="34.854741178s" podCreationTimestamp="2025-09-06 00:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:43.854307691 +0000 UTC m=+37.739818041" watchObservedRunningTime="2025-09-06 00:04:43.854741178 +0000 UTC m=+37.740251468" Sep 6 00:04:43.855913 kubelet[3036]: I0906 00:04:43.855304 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ll28j" podStartSLOduration=34.855287927 podStartE2EDuration="34.855287927s" podCreationTimestamp="2025-09-06 00:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:43.817427403 +0000 UTC m=+37.702937669" watchObservedRunningTime="2025-09-06 00:04:43.855287927 +0000 UTC m=+37.740798205" Sep 6 00:04:43.871135 env[1935]: time="2025-09-06T00:04:43.871072035Z" level=info msg="shim disconnected" id=4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c Sep 6 00:04:43.871639 env[1935]: time="2025-09-06T00:04:43.871601131Z" level=warning msg="cleaning up after shim disconnected" id=4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c namespace=k8s.io Sep 6 00:04:43.871808 env[1935]: time="2025-09-06T00:04:43.871779689Z" level=info msg="cleaning up dead shim" Sep 6 00:04:43.876881 env[1935]: time="2025-09-06T00:04:43.876818206Z" level=info msg="RemoveContainer for \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\" returns successfully" Sep 
6 00:04:43.878148 kubelet[3036]: I0906 00:04:43.877894 3036 scope.go:117] "RemoveContainer" containerID="5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37" Sep 6 00:04:43.888678 env[1935]: time="2025-09-06T00:04:43.888622217Z" level=info msg="RemoveContainer for \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\"" Sep 6 00:04:43.904436 env[1935]: time="2025-09-06T00:04:43.904373239Z" level=info msg="RemoveContainer for \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\" returns successfully" Sep 6 00:04:43.905215 kubelet[3036]: I0906 00:04:43.905029 3036 scope.go:117] "RemoveContainer" containerID="b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f" Sep 6 00:04:43.908712 env[1935]: time="2025-09-06T00:04:43.908649259Z" level=info msg="RemoveContainer for \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\"" Sep 6 00:04:43.910621 env[1935]: time="2025-09-06T00:04:43.910552993Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4639 runtime=io.containerd.runc.v2\n" Sep 6 00:04:43.918842 env[1935]: time="2025-09-06T00:04:43.918782217Z" level=info msg="RemoveContainer for \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\" returns successfully" Sep 6 00:04:43.919668 kubelet[3036]: I0906 00:04:43.919456 3036 scope.go:117] "RemoveContainer" containerID="47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30" Sep 6 00:04:43.920194 env[1935]: time="2025-09-06T00:04:43.920084203Z" level=error msg="ContainerStatus for \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\": not found" Sep 6 00:04:43.921105 kubelet[3036]: E0906 00:04:43.920710 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\": not found" containerID="47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30" Sep 6 00:04:43.921105 kubelet[3036]: I0906 00:04:43.920767 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30"} err="failed to get container status \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\": rpc error: code = NotFound desc = an error occurred when try to find container \"47eb87d4137af4d2b89e67d549ec10d041c7422e5b143143e92c693ae98f2c30\": not found" Sep 6 00:04:43.921105 kubelet[3036]: I0906 00:04:43.920926 3036 scope.go:117] "RemoveContainer" containerID="f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c" Sep 6 00:04:43.921766 env[1935]: time="2025-09-06T00:04:43.921680564Z" level=error msg="ContainerStatus for \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\": not found" Sep 6 00:04:43.922512 kubelet[3036]: E0906 00:04:43.922132 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\": not found" containerID="f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c" Sep 6 00:04:43.922512 kubelet[3036]: I0906 00:04:43.922217 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c"} err="failed to get container status \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"f47b9cc56abb3a654bd2d9f64bc70e03937494872c945d18b7c5c372ee72546c\": not found" Sep 6 00:04:43.922512 kubelet[3036]: I0906 00:04:43.922313 3036 scope.go:117] "RemoveContainer" containerID="b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0" Sep 6 00:04:43.923114 env[1935]: time="2025-09-06T00:04:43.923004427Z" level=error msg="ContainerStatus for \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\": not found" Sep 6 00:04:43.923781 kubelet[3036]: E0906 00:04:43.923498 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\": not found" containerID="b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0" Sep 6 00:04:43.923781 kubelet[3036]: I0906 00:04:43.923568 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0"} err="failed to get container status \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"b72f47879a64ea89708a8b6db2b4e3a6156d8e7815187edc6cc0169a92a372c0\": not found" Sep 6 00:04:43.923781 kubelet[3036]: I0906 00:04:43.923607 3036 scope.go:117] "RemoveContainer" containerID="5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37" Sep 6 00:04:43.924455 env[1935]: time="2025-09-06T00:04:43.924359239Z" level=error msg="ContainerStatus for \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\": not found" Sep 6 00:04:43.925203 kubelet[3036]: E0906 00:04:43.924866 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\": not found" containerID="5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37" Sep 6 00:04:43.925203 kubelet[3036]: I0906 00:04:43.924969 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37"} err="failed to get container status \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b9a27ae8f2e98f5a68d87b1f5f79da5af5ced9cf8d47b655c9707c621529a37\": not found" Sep 6 00:04:43.925203 kubelet[3036]: I0906 00:04:43.925030 3036 scope.go:117] "RemoveContainer" containerID="b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f" Sep 6 00:04:43.926076 env[1935]: time="2025-09-06T00:04:43.925925863Z" level=error msg="ContainerStatus for \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\": not found" Sep 6 00:04:43.926691 kubelet[3036]: E0906 00:04:43.926530 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\": not found" containerID="b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f" Sep 6 00:04:43.926691 kubelet[3036]: I0906 00:04:43.926606 3036 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f"} err="failed to get container status \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b34cea54e7f2a32157552b8f362fd5972a3a06a008c10ca597bf7611341eae2f\": not found" Sep 6 00:04:44.448056 kubelet[3036]: I0906 00:04:44.447942 3036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d139c8-5e49-46e6-9c28-0e47463dc97b" path="/var/lib/kubelet/pods/20d139c8-5e49-46e6-9c28-0e47463dc97b/volumes" Sep 6 00:04:44.818698 env[1935]: time="2025-09-06T00:04:44.817033404Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:04:44.851370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157478377.mount: Deactivated successfully. 
Sep 6 00:04:44.870388 env[1935]: time="2025-09-06T00:04:44.870303385Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\"" Sep 6 00:04:44.871705 env[1935]: time="2025-09-06T00:04:44.871561387Z" level=info msg="StartContainer for \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\"" Sep 6 00:04:45.001771 env[1935]: time="2025-09-06T00:04:45.000341161Z" level=info msg="StartContainer for \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\" returns successfully" Sep 6 00:04:45.048178 env[1935]: time="2025-09-06T00:04:45.048061044Z" level=info msg="shim disconnected" id=87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200 Sep 6 00:04:45.048476 env[1935]: time="2025-09-06T00:04:45.048166218Z" level=warning msg="cleaning up after shim disconnected" id=87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200 namespace=k8s.io Sep 6 00:04:45.048476 env[1935]: time="2025-09-06T00:04:45.048203864Z" level=info msg="cleaning up dead shim" Sep 6 00:04:45.063418 env[1935]: time="2025-09-06T00:04:45.063361937Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4704 runtime=io.containerd.runc.v2\n" Sep 6 00:04:45.833855 env[1935]: time="2025-09-06T00:04:45.833775305Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:04:45.840351 systemd[1]: run-containerd-runc-k8s.io-87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200-runc.ksrK4c.mount: Deactivated successfully. 
Sep 6 00:04:45.840784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200-rootfs.mount: Deactivated successfully. Sep 6 00:04:45.909598 env[1935]: time="2025-09-06T00:04:45.909501442Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\"" Sep 6 00:04:45.911391 env[1935]: time="2025-09-06T00:04:45.911074676Z" level=info msg="StartContainer for \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\"" Sep 6 00:04:46.039287 env[1935]: time="2025-09-06T00:04:46.039166978Z" level=info msg="StartContainer for \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\" returns successfully" Sep 6 00:04:46.102383 env[1935]: time="2025-09-06T00:04:46.102013860Z" level=info msg="shim disconnected" id=e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5 Sep 6 00:04:46.102383 env[1935]: time="2025-09-06T00:04:46.102084279Z" level=warning msg="cleaning up after shim disconnected" id=e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5 namespace=k8s.io Sep 6 00:04:46.102383 env[1935]: time="2025-09-06T00:04:46.102109493Z" level=info msg="cleaning up dead shim" Sep 6 00:04:46.118341 env[1935]: time="2025-09-06T00:04:46.118270291Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4763 runtime=io.containerd.runc.v2\n" Sep 6 00:04:46.679381 kubelet[3036]: E0906 00:04:46.679304 3036 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:04:46.840997 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5-rootfs.mount: Deactivated successfully. Sep 6 00:04:46.850927 env[1935]: time="2025-09-06T00:04:46.850837277Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:04:46.895043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306640940.mount: Deactivated successfully. Sep 6 00:04:46.904138 env[1935]: time="2025-09-06T00:04:46.904061489Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\"" Sep 6 00:04:46.907197 env[1935]: time="2025-09-06T00:04:46.905563030Z" level=info msg="StartContainer for \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\"" Sep 6 00:04:47.060832 env[1935]: time="2025-09-06T00:04:47.060668783Z" level=info msg="StartContainer for \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\" returns successfully" Sep 6 00:04:47.115459 env[1935]: time="2025-09-06T00:04:47.115366589Z" level=info msg="shim disconnected" id=333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922 Sep 6 00:04:47.115459 env[1935]: time="2025-09-06T00:04:47.115440945Z" level=warning msg="cleaning up after shim disconnected" id=333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922 namespace=k8s.io Sep 6 00:04:47.115822 env[1935]: time="2025-09-06T00:04:47.115463314Z" level=info msg="cleaning up dead shim" Sep 6 00:04:47.129833 env[1935]: time="2025-09-06T00:04:47.129766121Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4819 runtime=io.containerd.runc.v2\n" Sep 6 
00:04:47.433262 kubelet[3036]: I0906 00:04:47.433073 3036 setters.go:600] "Node became not ready" node="ip-172-31-29-77" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:04:47Z","lastTransitionTime":"2025-09-06T00:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:04:47.842823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922-rootfs.mount: Deactivated successfully. Sep 6 00:04:47.847962 env[1935]: time="2025-09-06T00:04:47.847873996Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:04:47.875377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680324226.mount: Deactivated successfully. Sep 6 00:04:47.897512 env[1935]: time="2025-09-06T00:04:47.897216987Z" level=info msg="CreateContainer within sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\"" Sep 6 00:04:47.898039 env[1935]: time="2025-09-06T00:04:47.897990246Z" level=info msg="StartContainer for \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\"" Sep 6 00:04:48.021260 env[1935]: time="2025-09-06T00:04:48.017436445Z" level=info msg="StartContainer for \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\" returns successfully" Sep 6 00:04:50.286497 systemd[1]: run-containerd-runc-k8s.io-fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d-runc.sTVuQv.mount: Deactivated successfully. 
Sep 6 00:04:50.408512 kubelet[3036]: E0906 00:04:50.408413 3036 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:36826->127.0.0.1:40105: write tcp 172.31.29.77:10250->172.31.29.77:47704: write: connection reset by peer Sep 6 00:04:52.520086 systemd[1]: run-containerd-runc-k8s.io-fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d-runc.DYdv2z.mount: Deactivated successfully. Sep 6 00:04:52.754392 systemd-networkd[1599]: lxc_health: Link UP Sep 6 00:04:52.767752 (udev-worker)[5367]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:04:52.777550 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:04:52.772709 systemd-networkd[1599]: lxc_health: Gained carrier Sep 6 00:04:53.462123 kubelet[3036]: I0906 00:04:53.462021 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mtk55" podStartSLOduration=10.461996121 podStartE2EDuration="10.461996121s" podCreationTimestamp="2025-09-06 00:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:48.976544151 +0000 UTC m=+42.862054441" watchObservedRunningTime="2025-09-06 00:04:53.461996121 +0000 UTC m=+47.347506399" Sep 6 00:04:54.190086 systemd-networkd[1599]: lxc_health: Gained IPv6LL Sep 6 00:04:54.748671 systemd[1]: run-containerd-runc-k8s.io-fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d-runc.QY5gdM.mount: Deactivated successfully. Sep 6 00:04:57.052407 systemd[1]: run-containerd-runc-k8s.io-fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d-runc.kfHXSB.mount: Deactivated successfully. Sep 6 00:04:59.323563 systemd[1]: run-containerd-runc-k8s.io-fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d-runc.GWOJEd.mount: Deactivated successfully. 
Sep 6 00:04:59.834725 sudo[2201]: pam_unix(sudo:session): session closed for user root Sep 6 00:04:59.860344 sshd[2197]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:59.866490 systemd[1]: sshd@4-172.31.29.77:22-147.75.109.163:47474.service: Deactivated successfully. Sep 6 00:04:59.868058 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:04:59.868337 systemd-logind[1921]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:04:59.873035 systemd-logind[1921]: Removed session 5. Sep 6 00:05:02.873962 amazon-ssm-agent[1904]: 2025-09-06 00:05:02 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:05:06.358097 env[1935]: time="2025-09-06T00:05:06.357837385Z" level=info msg="StopPodSandbox for \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\"" Sep 6 00:05:06.358984 env[1935]: time="2025-09-06T00:05:06.358041008Z" level=info msg="TearDown network for sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" successfully" Sep 6 00:05:06.358984 env[1935]: time="2025-09-06T00:05:06.358785926Z" level=info msg="StopPodSandbox for \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" returns successfully" Sep 6 00:05:06.359891 env[1935]: time="2025-09-06T00:05:06.359737407Z" level=info msg="RemovePodSandbox for \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\"" Sep 6 00:05:06.359891 env[1935]: time="2025-09-06T00:05:06.359817613Z" level=info msg="Forcibly stopping sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\"" Sep 6 00:05:06.360341 env[1935]: time="2025-09-06T00:05:06.360208756Z" level=info msg="TearDown network for sandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" successfully" Sep 6 00:05:06.367953 env[1935]: time="2025-09-06T00:05:06.367886206Z" level=info msg="RemovePodSandbox \"33f6622b54f4a604caac3441e795a6750e76ea444901496a25af45ca948debd1\" returns successfully" Sep 6 00:05:41.197955 
systemd[1]: Started sshd@5-172.31.29.77:22-147.75.109.163:42844.service. Sep 6 00:05:41.369750 sshd[5490]: Accepted publickey for core from 147.75.109.163 port 42844 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:41.372838 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:41.382731 systemd-logind[1921]: New session 6 of user core. Sep 6 00:05:41.383067 systemd[1]: Started session-6.scope. Sep 6 00:05:41.770360 sshd[5490]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:41.775357 systemd[1]: sshd@5-172.31.29.77:22-147.75.109.163:42844.service: Deactivated successfully. Sep 6 00:05:41.778177 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:05:41.779804 systemd-logind[1921]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:05:41.783090 systemd-logind[1921]: Removed session 6. Sep 6 00:05:46.795375 systemd[1]: Started sshd@6-172.31.29.77:22-147.75.109.163:42856.service. Sep 6 00:05:46.968786 sshd[5520]: Accepted publickey for core from 147.75.109.163 port 42856 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:46.971958 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:46.980143 systemd-logind[1921]: New session 7 of user core. Sep 6 00:05:46.980908 systemd[1]: Started session-7.scope. Sep 6 00:05:47.239010 sshd[5520]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:47.243913 systemd-logind[1921]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:05:47.245680 systemd[1]: sshd@6-172.31.29.77:22-147.75.109.163:42856.service: Deactivated successfully. Sep 6 00:05:47.247218 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:05:47.249269 systemd-logind[1921]: Removed session 7. Sep 6 00:05:52.265864 systemd[1]: Started sshd@7-172.31.29.77:22-147.75.109.163:37496.service. 
Sep 6 00:05:52.439945 sshd[5533]: Accepted publickey for core from 147.75.109.163 port 37496 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:52.442651 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:52.454858 systemd-logind[1921]: New session 8 of user core. Sep 6 00:05:52.455930 systemd[1]: Started session-8.scope. Sep 6 00:05:52.705719 sshd[5533]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:52.711496 systemd-logind[1921]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:05:52.713124 systemd[1]: sshd@7-172.31.29.77:22-147.75.109.163:37496.service: Deactivated successfully. Sep 6 00:05:52.715880 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:05:52.717948 systemd-logind[1921]: Removed session 8. Sep 6 00:05:57.732834 systemd[1]: Started sshd@8-172.31.29.77:22-147.75.109.163:37500.service. Sep 6 00:05:57.919564 sshd[5547]: Accepted publickey for core from 147.75.109.163 port 37500 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:57.922841 sshd[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:57.930718 systemd-logind[1921]: New session 9 of user core. Sep 6 00:05:57.931764 systemd[1]: Started session-9.scope. Sep 6 00:05:58.189174 sshd[5547]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:58.194653 systemd-logind[1921]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:05:58.196409 systemd[1]: sshd@8-172.31.29.77:22-147.75.109.163:37500.service: Deactivated successfully. Sep 6 00:05:58.197899 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:05:58.199602 systemd-logind[1921]: Removed session 9. Sep 6 00:05:58.212360 systemd[1]: Started sshd@9-172.31.29.77:22-147.75.109.163:37516.service. 
Sep 6 00:05:58.383131 sshd[5562]: Accepted publickey for core from 147.75.109.163 port 37516 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:58.386569 sshd[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:58.396133 systemd[1]: Started session-10.scope. Sep 6 00:05:58.396719 systemd-logind[1921]: New session 10 of user core. Sep 6 00:05:58.706177 sshd[5562]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:58.712702 systemd-logind[1921]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:05:58.715741 systemd[1]: sshd@9-172.31.29.77:22-147.75.109.163:37516.service: Deactivated successfully. Sep 6 00:05:58.717749 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:05:58.721008 systemd-logind[1921]: Removed session 10. Sep 6 00:05:58.740393 systemd[1]: Started sshd@10-172.31.29.77:22-147.75.109.163:37518.service. Sep 6 00:05:58.924023 sshd[5573]: Accepted publickey for core from 147.75.109.163 port 37518 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:58.926618 sshd[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:58.936123 systemd-logind[1921]: New session 11 of user core. Sep 6 00:05:58.941282 systemd[1]: Started session-11.scope. Sep 6 00:05:59.183705 sshd[5573]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:59.189609 systemd[1]: sshd@10-172.31.29.77:22-147.75.109.163:37518.service: Deactivated successfully. Sep 6 00:05:59.190123 systemd-logind[1921]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:05:59.193095 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:05:59.194736 systemd-logind[1921]: Removed session 11. Sep 6 00:06:04.210227 systemd[1]: Started sshd@11-172.31.29.77:22-147.75.109.163:60702.service. 
Sep 6 00:06:04.386536 sshd[5586]: Accepted publickey for core from 147.75.109.163 port 60702 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:04.388876 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:04.398055 systemd[1]: Started session-12.scope. Sep 6 00:06:04.399000 systemd-logind[1921]: New session 12 of user core. Sep 6 00:06:04.655339 sshd[5586]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:04.660527 systemd-logind[1921]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:06:04.662044 systemd[1]: sshd@11-172.31.29.77:22-147.75.109.163:60702.service: Deactivated successfully. Sep 6 00:06:04.663669 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:06:04.664661 systemd-logind[1921]: Removed session 12. Sep 6 00:06:09.681340 systemd[1]: Started sshd@12-172.31.29.77:22-147.75.109.163:60716.service. Sep 6 00:06:09.855518 sshd[5603]: Accepted publickey for core from 147.75.109.163 port 60716 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:09.858708 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:09.868107 systemd[1]: Started session-13.scope. Sep 6 00:06:09.868131 systemd-logind[1921]: New session 13 of user core. Sep 6 00:06:10.133833 sshd[5603]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:10.139053 systemd[1]: sshd@12-172.31.29.77:22-147.75.109.163:60716.service: Deactivated successfully. Sep 6 00:06:10.141307 systemd-logind[1921]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:06:10.141530 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:06:10.145046 systemd-logind[1921]: Removed session 13. Sep 6 00:06:15.159391 systemd[1]: Started sshd@13-172.31.29.77:22-147.75.109.163:51880.service. 
Sep 6 00:06:15.335501 sshd[5618]: Accepted publickey for core from 147.75.109.163 port 51880 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:15.338370 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:15.348736 systemd[1]: Started session-14.scope. Sep 6 00:06:15.350284 systemd-logind[1921]: New session 14 of user core. Sep 6 00:06:15.598336 sshd[5618]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:15.603994 systemd-logind[1921]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:06:15.605340 systemd[1]: sshd@13-172.31.29.77:22-147.75.109.163:51880.service: Deactivated successfully. Sep 6 00:06:15.607320 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:06:15.608747 systemd-logind[1921]: Removed session 14. Sep 6 00:06:20.624983 systemd[1]: Started sshd@14-172.31.29.77:22-147.75.109.163:44174.service. Sep 6 00:06:20.801575 sshd[5631]: Accepted publickey for core from 147.75.109.163 port 44174 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:20.802746 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:20.810345 systemd-logind[1921]: New session 15 of user core. Sep 6 00:06:20.811721 systemd[1]: Started session-15.scope. Sep 6 00:06:21.062565 sshd[5631]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:21.068997 systemd[1]: sshd@14-172.31.29.77:22-147.75.109.163:44174.service: Deactivated successfully. Sep 6 00:06:21.070867 systemd-logind[1921]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:06:21.072693 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:06:21.075624 systemd-logind[1921]: Removed session 15. Sep 6 00:06:21.089710 systemd[1]: Started sshd@15-172.31.29.77:22-147.75.109.163:44182.service. 
Sep 6 00:06:21.272259 sshd[5644]: Accepted publickey for core from 147.75.109.163 port 44182 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:21.275369 sshd[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:21.284072 systemd-logind[1921]: New session 16 of user core. Sep 6 00:06:21.284760 systemd[1]: Started session-16.scope. Sep 6 00:06:21.628525 sshd[5644]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:21.633593 systemd[1]: sshd@15-172.31.29.77:22-147.75.109.163:44182.service: Deactivated successfully. Sep 6 00:06:21.635604 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:06:21.635647 systemd-logind[1921]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:06:21.637976 systemd-logind[1921]: Removed session 16. Sep 6 00:06:21.652352 systemd[1]: Started sshd@16-172.31.29.77:22-147.75.109.163:44190.service. Sep 6 00:06:21.820382 sshd[5655]: Accepted publickey for core from 147.75.109.163 port 44190 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:21.823559 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:21.832392 systemd[1]: Started session-17.scope. Sep 6 00:06:21.832711 systemd-logind[1921]: New session 17 of user core. Sep 6 00:06:24.258070 sshd[5655]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:24.263709 systemd-logind[1921]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:06:24.264102 systemd[1]: sshd@16-172.31.29.77:22-147.75.109.163:44190.service: Deactivated successfully. Sep 6 00:06:24.265714 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:06:24.267885 systemd-logind[1921]: Removed session 17. Sep 6 00:06:24.284879 systemd[1]: Started sshd@17-172.31.29.77:22-147.75.109.163:44206.service. 
Sep 6 00:06:24.472697 sshd[5673]: Accepted publickey for core from 147.75.109.163 port 44206 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:24.475836 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:24.483334 systemd-logind[1921]: New session 18 of user core. Sep 6 00:06:24.485058 systemd[1]: Started session-18.scope. Sep 6 00:06:24.992819 sshd[5673]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:24.998286 systemd[1]: sshd@17-172.31.29.77:22-147.75.109.163:44206.service: Deactivated successfully. Sep 6 00:06:25.000973 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:06:25.001694 systemd-logind[1921]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:06:25.004815 systemd-logind[1921]: Removed session 18. Sep 6 00:06:25.018795 systemd[1]: Started sshd@18-172.31.29.77:22-147.75.109.163:44210.service. Sep 6 00:06:25.189348 sshd[5684]: Accepted publickey for core from 147.75.109.163 port 44210 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:25.192990 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:25.202020 systemd[1]: Started session-19.scope. Sep 6 00:06:25.202792 systemd-logind[1921]: New session 19 of user core. Sep 6 00:06:25.457117 sshd[5684]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:25.462070 systemd[1]: sshd@18-172.31.29.77:22-147.75.109.163:44210.service: Deactivated successfully. Sep 6 00:06:25.465102 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:06:25.465983 systemd-logind[1921]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:06:25.468142 systemd-logind[1921]: Removed session 19. Sep 6 00:06:30.483202 systemd[1]: Started sshd@19-172.31.29.77:22-147.75.109.163:55410.service. 
Sep 6 00:06:30.654413 sshd[5697]: Accepted publickey for core from 147.75.109.163 port 55410 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI
Sep 6 00:06:30.656951 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:06:30.665346 systemd-logind[1921]: New session 20 of user core.
Sep 6 00:06:30.666571 systemd[1]: Started session-20.scope.
Sep 6 00:06:30.909581 sshd[5697]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:30.914578 systemd-logind[1921]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:06:30.915122 systemd[1]: sshd@19-172.31.29.77:22-147.75.109.163:55410.service: Deactivated successfully.
Sep 6 00:06:30.916747 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:06:30.917818 systemd-logind[1921]: Removed session 20.
Sep 6 00:06:35.936167 systemd[1]: Started sshd@20-172.31.29.77:22-147.75.109.163:55412.service.
Sep 6 00:06:36.107321 sshd[5713]: Accepted publickey for core from 147.75.109.163 port 55412 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI
Sep 6 00:06:36.109833 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:06:36.117335 systemd-logind[1921]: New session 21 of user core.
Sep 6 00:06:36.119111 systemd[1]: Started session-21.scope.
Sep 6 00:06:36.379450 sshd[5713]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:36.384409 systemd[1]: sshd@20-172.31.29.77:22-147.75.109.163:55412.service: Deactivated successfully.
Sep 6 00:06:36.386459 systemd-logind[1921]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:06:36.386621 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:06:36.390178 systemd-logind[1921]: Removed session 21.
Sep 6 00:06:41.405079 systemd[1]: Started sshd@21-172.31.29.77:22-147.75.109.163:34106.service.
Sep 6 00:06:41.584790 sshd[5728]: Accepted publickey for core from 147.75.109.163 port 34106 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI
Sep 6 00:06:41.586551 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:06:41.596317 systemd[1]: Started session-22.scope.
Sep 6 00:06:41.597126 systemd-logind[1921]: New session 22 of user core.
Sep 6 00:06:41.864758 sshd[5728]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:41.869352 systemd[1]: sshd@21-172.31.29.77:22-147.75.109.163:34106.service: Deactivated successfully.
Sep 6 00:06:41.870834 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 00:06:41.872402 systemd-logind[1921]: Session 22 logged out. Waiting for processes to exit.
Sep 6 00:06:41.874183 systemd-logind[1921]: Removed session 22.
Sep 6 00:06:46.888955 systemd[1]: Started sshd@22-172.31.29.77:22-147.75.109.163:34120.service.
Sep 6 00:06:47.061778 sshd[5741]: Accepted publickey for core from 147.75.109.163 port 34120 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI
Sep 6 00:06:47.064892 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:06:47.073937 systemd[1]: Started session-23.scope.
Sep 6 00:06:47.074592 systemd-logind[1921]: New session 23 of user core.
Sep 6 00:06:47.318502 sshd[5741]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:47.323529 systemd[1]: sshd@22-172.31.29.77:22-147.75.109.163:34120.service: Deactivated successfully.
Sep 6 00:06:47.325863 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 00:06:47.326831 systemd-logind[1921]: Session 23 logged out. Waiting for processes to exit.
Sep 6 00:06:47.328856 systemd-logind[1921]: Removed session 23.
Sep 6 00:06:47.344048 systemd[1]: Started sshd@23-172.31.29.77:22-147.75.109.163:34136.service.
Sep 6 00:06:47.522494 sshd[5754]: Accepted publickey for core from 147.75.109.163 port 34136 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI
Sep 6 00:06:47.525178 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:06:47.533679 systemd-logind[1921]: New session 24 of user core.
Sep 6 00:06:47.534625 systemd[1]: Started session-24.scope.
Sep 6 00:06:50.133874 env[1935]: time="2025-09-06T00:06:50.133747564Z" level=info msg="StopContainer for \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\" with timeout 30 (s)"
Sep 6 00:06:50.136534 env[1935]: time="2025-09-06T00:06:50.136476097Z" level=info msg="Stop container \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\" with signal terminated"
Sep 6 00:06:50.154826 systemd[1]: run-containerd-runc-k8s.io-fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d-runc.wjkTMv.mount: Deactivated successfully.
Sep 6 00:06:50.206875 env[1935]: time="2025-09-06T00:06:50.206796489Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:06:50.218291 env[1935]: time="2025-09-06T00:06:50.218206273Z" level=info msg="StopContainer for \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\" with timeout 2 (s)"
Sep 6 00:06:50.224166 env[1935]: time="2025-09-06T00:06:50.222108669Z" level=info msg="Stop container \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\" with signal terminated"
Sep 6 00:06:50.252464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff-rootfs.mount: Deactivated successfully.
Sep 6 00:06:50.263605 systemd-networkd[1599]: lxc_health: Link DOWN
Sep 6 00:06:50.263625 systemd-networkd[1599]: lxc_health: Lost carrier
Sep 6 00:06:50.290301 env[1935]: time="2025-09-06T00:06:50.278358605Z" level=info msg="shim disconnected" id=8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff
Sep 6 00:06:50.290301 env[1935]: time="2025-09-06T00:06:50.278433644Z" level=warning msg="cleaning up after shim disconnected" id=8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff namespace=k8s.io
Sep 6 00:06:50.290301 env[1935]: time="2025-09-06T00:06:50.278455928Z" level=info msg="cleaning up dead shim"
Sep 6 00:06:50.310186 env[1935]: time="2025-09-06T00:06:50.310108179Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5809 runtime=io.containerd.runc.v2\n"
Sep 6 00:06:50.314879 env[1935]: time="2025-09-06T00:06:50.314799995Z" level=info msg="StopContainer for \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\" returns successfully"
Sep 6 00:06:50.315572 env[1935]: time="2025-09-06T00:06:50.315492380Z" level=info msg="StopPodSandbox for \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\""
Sep 6 00:06:50.315855 env[1935]: time="2025-09-06T00:06:50.315595955Z" level=info msg="Container to stop \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:50.319844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9-shm.mount: Deactivated successfully.
Sep 6 00:06:50.347192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d-rootfs.mount: Deactivated successfully.
Sep 6 00:06:50.370271 env[1935]: time="2025-09-06T00:06:50.370164845Z" level=info msg="shim disconnected" id=fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d
Sep 6 00:06:50.370271 env[1935]: time="2025-09-06T00:06:50.370255376Z" level=warning msg="cleaning up after shim disconnected" id=fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d namespace=k8s.io
Sep 6 00:06:50.370613 env[1935]: time="2025-09-06T00:06:50.370279401Z" level=info msg="cleaning up dead shim"
Sep 6 00:06:50.388017 env[1935]: time="2025-09-06T00:06:50.386814057Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5852 runtime=io.containerd.runc.v2\n"
Sep 6 00:06:50.390977 env[1935]: time="2025-09-06T00:06:50.390912371Z" level=info msg="StopContainer for \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\" returns successfully"
Sep 6 00:06:50.391927 env[1935]: time="2025-09-06T00:06:50.391860448Z" level=info msg="StopPodSandbox for \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\""
Sep 6 00:06:50.392071 env[1935]: time="2025-09-06T00:06:50.391979335Z" level=info msg="Container to stop \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:50.392071 env[1935]: time="2025-09-06T00:06:50.392011376Z" level=info msg="Container to stop \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:50.392071 env[1935]: time="2025-09-06T00:06:50.392040237Z" level=info msg="Container to stop \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:50.392457 env[1935]: time="2025-09-06T00:06:50.392068642Z" level=info msg="Container to stop \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:50.392457 env[1935]: time="2025-09-06T00:06:50.392097383Z" level=info msg="Container to stop \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:50.414055 env[1935]: time="2025-09-06T00:06:50.413975547Z" level=info msg="shim disconnected" id=7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9
Sep 6 00:06:50.414374 env[1935]: time="2025-09-06T00:06:50.414054329Z" level=warning msg="cleaning up after shim disconnected" id=7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9 namespace=k8s.io
Sep 6 00:06:50.414374 env[1935]: time="2025-09-06T00:06:50.414077430Z" level=info msg="cleaning up dead shim"
Sep 6 00:06:50.438618 env[1935]: time="2025-09-06T00:06:50.438555759Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5881 runtime=io.containerd.runc.v2\n"
Sep 6 00:06:50.439148 env[1935]: time="2025-09-06T00:06:50.439087591Z" level=info msg="TearDown network for sandbox \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" successfully"
Sep 6 00:06:50.439307 env[1935]: time="2025-09-06T00:06:50.439139673Z" level=info msg="StopPodSandbox for \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" returns successfully"
Sep 6 00:06:50.474800 env[1935]: time="2025-09-06T00:06:50.474631582Z" level=info msg="shim disconnected" id=5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6
Sep 6 00:06:50.474800 env[1935]: time="2025-09-06T00:06:50.474711661Z" level=warning msg="cleaning up after shim disconnected" id=5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6 namespace=k8s.io
Sep 6 00:06:50.474800 env[1935]: time="2025-09-06T00:06:50.474734605Z" level=info msg="cleaning up dead shim"
Sep 6 00:06:50.489583 env[1935]: time="2025-09-06T00:06:50.489511414Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5908 runtime=io.containerd.runc.v2\n"
Sep 6 00:06:50.490148 env[1935]: time="2025-09-06T00:06:50.490098651Z" level=info msg="TearDown network for sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" successfully"
Sep 6 00:06:50.490464 env[1935]: time="2025-09-06T00:06:50.490148945Z" level=info msg="StopPodSandbox for \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" returns successfully"
Sep 6 00:06:50.537889 kubelet[3036]: I0906 00:06:50.537822 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cab44746-4338-4010-9e59-dc8c9532c501-cilium-config-path\") pod \"cab44746-4338-4010-9e59-dc8c9532c501\" (UID: \"cab44746-4338-4010-9e59-dc8c9532c501\") "
Sep 6 00:06:50.538704 kubelet[3036]: I0906 00:06:50.538655 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzpj\" (UniqueName: \"kubernetes.io/projected/cab44746-4338-4010-9e59-dc8c9532c501-kube-api-access-wkzpj\") pod \"cab44746-4338-4010-9e59-dc8c9532c501\" (UID: \"cab44746-4338-4010-9e59-dc8c9532c501\") "
Sep 6 00:06:50.544102 kubelet[3036]: I0906 00:06:50.544047 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab44746-4338-4010-9e59-dc8c9532c501-kube-api-access-wkzpj" (OuterVolumeSpecName: "kube-api-access-wkzpj") pod "cab44746-4338-4010-9e59-dc8c9532c501" (UID: "cab44746-4338-4010-9e59-dc8c9532c501"). InnerVolumeSpecName "kube-api-access-wkzpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:06:50.547908 kubelet[3036]: I0906 00:06:50.547848 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cab44746-4338-4010-9e59-dc8c9532c501-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cab44746-4338-4010-9e59-dc8c9532c501" (UID: "cab44746-4338-4010-9e59-dc8c9532c501"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:06:50.641026 kubelet[3036]: I0906 00:06:50.639718 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-kube-api-access-7m4h6\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.641422 kubelet[3036]: I0906 00:06:50.641390 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-clustermesh-secrets\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.642138 kubelet[3036]: I0906 00:06:50.642082 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-xtables-lock\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.642415 kubelet[3036]: I0906 00:06:50.642370 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-cgroup\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.642624 kubelet[3036]: I0906 00:06:50.642572 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hubble-tls\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.642806 kubelet[3036]: I0906 00:06:50.642779 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-net\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.642984 kubelet[3036]: I0906 00:06:50.642959 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-kernel\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.643172 kubelet[3036]: I0906 00:06:50.643146 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-config-path\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.643572 kubelet[3036]: I0906 00:06:50.643529 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hostproc\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.643797 kubelet[3036]: I0906 00:06:50.643727 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-run\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.643948 kubelet[3036]: I0906 00:06:50.643924 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-lib-modules\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.644114 kubelet[3036]: I0906 00:06:50.644089 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-etc-cni-netd\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.644341 kubelet[3036]: I0906 00:06:50.644278 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-bpf-maps\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.645021 kubelet[3036]: I0906 00:06:50.644989 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cni-path\") pod \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\" (UID: \"5cd06277-8ae6-43bd-b8e2-d1b7109fb58f\") "
Sep 6 00:06:50.645390 kubelet[3036]: I0906 00:06:50.645362 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cab44746-4338-4010-9e59-dc8c9532c501-cilium-config-path\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.645550 kubelet[3036]: I0906 00:06:50.645528 3036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkzpj\" (UniqueName: \"kubernetes.io/projected/cab44746-4338-4010-9e59-dc8c9532c501-kube-api-access-wkzpj\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.645746 kubelet[3036]: I0906 00:06:50.644540 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.646221 kubelet[3036]: I0906 00:06:50.644571 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.646221 kubelet[3036]: I0906 00:06:50.644597 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.646221 kubelet[3036]: I0906 00:06:50.645698 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cni-path" (OuterVolumeSpecName: "cni-path") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.646513 kubelet[3036]: I0906 00:06:50.646283 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-kube-api-access-7m4h6" (OuterVolumeSpecName: "kube-api-access-7m4h6") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "kube-api-access-7m4h6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:06:50.646513 kubelet[3036]: I0906 00:06:50.646342 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hostproc" (OuterVolumeSpecName: "hostproc") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.646513 kubelet[3036]: I0906 00:06:50.646381 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.646513 kubelet[3036]: I0906 00:06:50.646418 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.646957 kubelet[3036]: I0906 00:06:50.646915 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.647123 kubelet[3036]: I0906 00:06:50.647097 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.647299 kubelet[3036]: I0906 00:06:50.647273 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:06:50.651830 kubelet[3036]: I0906 00:06:50.651763 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:06:50.653169 kubelet[3036]: I0906 00:06:50.653098 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 6 00:06:50.656408 kubelet[3036]: I0906 00:06:50.656357 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" (UID: "5cd06277-8ae6-43bd-b8e2-d1b7109fb58f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:06:50.746659 kubelet[3036]: I0906 00:06:50.746620 3036 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hostproc\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.746902 kubelet[3036]: I0906 00:06:50.746878 3036 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-kernel\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747064 kubelet[3036]: I0906 00:06:50.747042 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-config-path\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747187 kubelet[3036]: I0906 00:06:50.747166 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-run\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747359 kubelet[3036]: I0906 00:06:50.747337 3036 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-lib-modules\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747484 kubelet[3036]: I0906 00:06:50.747462 3036 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-etc-cni-netd\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747602 kubelet[3036]: I0906 00:06:50.747580 3036 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-bpf-maps\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747718 kubelet[3036]: I0906 00:06:50.747697 3036 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cni-path\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747842 kubelet[3036]: I0906 00:06:50.747820 3036 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-clustermesh-secrets\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.747960 kubelet[3036]: I0906 00:06:50.747939 3036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-kube-api-access-7m4h6\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.748073 kubelet[3036]: I0906 00:06:50.748050 3036 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-xtables-lock\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.748192 kubelet[3036]: I0906 00:06:50.748170 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-cilium-cgroup\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.748328 kubelet[3036]: I0906 00:06:50.748306 3036 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-hubble-tls\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:50.748465 kubelet[3036]: I0906 00:06:50.748443 3036 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f-host-proc-sys-net\") on node \"ip-172-31-29-77\" DevicePath \"\""
Sep 6 00:06:51.146011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6-rootfs.mount: Deactivated successfully.
Sep 6 00:06:51.146594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6-shm.mount: Deactivated successfully.
Sep 6 00:06:51.146996 systemd[1]: var-lib-kubelet-pods-5cd06277\x2d8ae6\x2d43bd\x2db8e2\x2dd1b7109fb58f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7m4h6.mount: Deactivated successfully.
Sep 6 00:06:51.147386 systemd[1]: var-lib-kubelet-pods-5cd06277\x2d8ae6\x2d43bd\x2db8e2\x2dd1b7109fb58f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 6 00:06:51.147758 systemd[1]: var-lib-kubelet-pods-5cd06277\x2d8ae6\x2d43bd\x2db8e2\x2dd1b7109fb58f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 6 00:06:51.148105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9-rootfs.mount: Deactivated successfully.
Sep 6 00:06:51.148471 systemd[1]: var-lib-kubelet-pods-cab44746\x2d4338\x2d4010\x2d9e59\x2ddc8c9532c501-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwkzpj.mount: Deactivated successfully.
Sep 6 00:06:51.197735 kubelet[3036]: I0906 00:06:51.197688 3036 scope.go:117] "RemoveContainer" containerID="8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff"
Sep 6 00:06:51.201021 env[1935]: time="2025-09-06T00:06:51.200954560Z" level=info msg="RemoveContainer for \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\""
Sep 6 00:06:51.209345 env[1935]: time="2025-09-06T00:06:51.209275596Z" level=info msg="RemoveContainer for \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\" returns successfully"
Sep 6 00:06:51.215533 kubelet[3036]: I0906 00:06:51.215499 3036 scope.go:117] "RemoveContainer" containerID="8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff"
Sep 6 00:06:51.216388 env[1935]: time="2025-09-06T00:06:51.216160046Z" level=error msg="ContainerStatus for \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\": not found"
Sep 6 00:06:51.216991 kubelet[3036]: E0906 00:06:51.216928 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\": not found" containerID="8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff"
Sep 6 00:06:51.218088 kubelet[3036]: I0906 00:06:51.218005 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff"} err="failed to get container status \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c53e4b78a7d95e254cf7465f3e296928772250cc171b62121f90e854ec920ff\": not found"
Sep 6 00:06:51.218485 kubelet[3036]: I0906 00:06:51.218383 3036 scope.go:117] "RemoveContainer" containerID="fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d"
Sep 6 00:06:51.224121 env[1935]: time="2025-09-06T00:06:51.223715447Z" level=info msg="RemoveContainer for \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\""
Sep 6 00:06:51.240934 env[1935]: time="2025-09-06T00:06:51.238450363Z" level=info msg="RemoveContainer for \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\" returns successfully"
Sep 6 00:06:51.241889 kubelet[3036]: I0906 00:06:51.238857 3036 scope.go:117] "RemoveContainer" containerID="333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922"
Sep 6 00:06:51.247273 env[1935]: time="2025-09-06T00:06:51.246764740Z" level=info msg="RemoveContainer for \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\""
Sep 6 00:06:51.253216 env[1935]: time="2025-09-06T00:06:51.253156923Z" level=info msg="RemoveContainer for \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\" returns successfully"
Sep 6 00:06:51.253876 kubelet[3036]: I0906 00:06:51.253847 3036 scope.go:117] "RemoveContainer" containerID="e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5"
Sep 6 00:06:51.256189 env[1935]: time="2025-09-06T00:06:51.255766341Z" level=info msg="RemoveContainer for \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\""
Sep 6 00:06:51.265041 env[1935]: time="2025-09-06T00:06:51.264985760Z" level=info msg="RemoveContainer for \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\" returns successfully"
Sep 6 00:06:51.265995 kubelet[3036]: I0906 00:06:51.265963 3036 scope.go:117] "RemoveContainer" containerID="87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200"
Sep 6 00:06:51.276500 env[1935]: time="2025-09-06T00:06:51.276453158Z" level=info msg="RemoveContainer for \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\""
Sep 6 00:06:51.283780 env[1935]: time="2025-09-06T00:06:51.283709139Z" level=info msg="RemoveContainer for \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\" returns successfully"
Sep 6 00:06:51.284339 kubelet[3036]: I0906 00:06:51.284296 3036 scope.go:117] "RemoveContainer" containerID="4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c"
Sep 6 00:06:51.286625 env[1935]: time="2025-09-06T00:06:51.286566676Z" level=info msg="RemoveContainer for \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\""
Sep 6 00:06:51.293195 env[1935]: time="2025-09-06T00:06:51.293078635Z" level=info msg="RemoveContainer for \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\" returns successfully"
Sep 6 00:06:51.293964 kubelet[3036]: I0906 00:06:51.293913 3036 scope.go:117] "RemoveContainer" containerID="fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d"
Sep 6 00:06:51.294594 env[1935]: time="2025-09-06T00:06:51.294470832Z" level=error msg="ContainerStatus for \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\": not found"
Sep 6 00:06:51.295204 kubelet[3036]: E0906 00:06:51.294917 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\": not found" containerID="fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d"
Sep 6 00:06:51.295204 kubelet[3036]: I0906 00:06:51.294987 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d"} err="failed to get container status \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe4d47e65c9923cee58b57544e6388b0a883978390cda1813e7b88e84e6e2e9d\": not found"
Sep 6 00:06:51.295204 kubelet[3036]: I0906 00:06:51.295041 3036 scope.go:117] "RemoveContainer" containerID="333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922"
Sep 6 00:06:51.295743 env[1935]: time="2025-09-06T00:06:51.295665996Z" level=error msg="ContainerStatus for \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\": not found"
Sep 6 00:06:51.296350 kubelet[3036]: E0906 00:06:51.296066 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\": not found" containerID="333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922"
Sep 6 00:06:51.296350 kubelet[3036]: I0906 00:06:51.296132 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922"} err="failed to get container status \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\": rpc error: code = NotFound desc = an error occurred when try to find container \"333302427e87c72fdffb6b0d36861198ceb34f53df2dba50cc1779ace00f5922\": not found"
Sep 6 00:06:51.296350 kubelet[3036]: I0906 00:06:51.296168 3036 scope.go:117] "RemoveContainer" containerID="e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5"
Sep 6 00:06:51.296849 env[1935]: time="2025-09-06T00:06:51.296776653Z" level=error msg="ContainerStatus for \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\": not found"
Sep 6 00:06:51.297464 kubelet[3036]: E0906 00:06:51.297171 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\": not found" containerID="e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5"
Sep 6 00:06:51.297464 kubelet[3036]: I0906 00:06:51.297263 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5"} err="failed to get container status \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e28a45ce5618377065abf0d6de695439cd511196bae4f5bcd2cf1c73303ab6a5\": not found"
Sep 6 00:06:51.297464 kubelet[3036]: I0906 00:06:51.297298 3036 scope.go:117] "RemoveContainer" containerID="87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200"
Sep 6 00:06:51.298025 env[1935]: time="2025-09-06T00:06:51.297946928Z" level=error msg="ContainerStatus for \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\": not found"
Sep 6 00:06:51.298596 kubelet[3036]: E0906 00:06:51.298357 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\": not found"
containerID="87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200" Sep 6 00:06:51.298596 kubelet[3036]: I0906 00:06:51.298403 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200"} err="failed to get container status \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\": rpc error: code = NotFound desc = an error occurred when try to find container \"87a6db1bd4dc69705f2112c97f1bab1c0cd8e4b627e4c594183d379e6a157200\": not found" Sep 6 00:06:51.298596 kubelet[3036]: I0906 00:06:51.298464 3036 scope.go:117] "RemoveContainer" containerID="4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c" Sep 6 00:06:51.299107 env[1935]: time="2025-09-06T00:06:51.299031197Z" level=error msg="ContainerStatus for \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\": not found" Sep 6 00:06:51.299471 kubelet[3036]: E0906 00:06:51.299409 3036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\": not found" containerID="4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c" Sep 6 00:06:51.299593 kubelet[3036]: I0906 00:06:51.299461 3036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c"} err="failed to get container status \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4880499cdba4f9ab4056603e6772b69b1ec72c7c6c87e00da755073a7233ac7c\": not found" Sep 6 00:06:51.713611 
kubelet[3036]: E0906 00:06:51.713548 3036 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:52.050609 sshd[5754]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:52.055047 systemd[1]: sshd@23-172.31.29.77:22-147.75.109.163:34136.service: Deactivated successfully. Sep 6 00:06:52.057714 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:06:52.058910 systemd-logind[1921]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:06:52.061838 systemd-logind[1921]: Removed session 24. Sep 6 00:06:52.076565 systemd[1]: Started sshd@24-172.31.29.77:22-147.75.109.163:39436.service. Sep 6 00:06:52.252680 sshd[5928]: Accepted publickey for core from 147.75.109.163 port 39436 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:52.255284 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:52.264941 systemd[1]: Started session-25.scope. Sep 6 00:06:52.265606 systemd-logind[1921]: New session 25 of user core. Sep 6 00:06:52.447466 kubelet[3036]: I0906 00:06:52.447330 3036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" path="/var/lib/kubelet/pods/5cd06277-8ae6-43bd-b8e2-d1b7109fb58f/volumes" Sep 6 00:06:52.448901 kubelet[3036]: I0906 00:06:52.448851 3036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab44746-4338-4010-9e59-dc8c9532c501" path="/var/lib/kubelet/pods/cab44746-4338-4010-9e59-dc8c9532c501/volumes" Sep 6 00:06:53.330868 sshd[5928]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:53.336326 systemd-logind[1921]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:06:53.336840 systemd[1]: sshd@24-172.31.29.77:22-147.75.109.163:39436.service: Deactivated successfully. 
Sep 6 00:06:53.339588 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:06:53.340653 systemd-logind[1921]: Removed session 25. Sep 6 00:06:53.362101 systemd[1]: Started sshd@25-172.31.29.77:22-147.75.109.163:39446.service. Sep 6 00:06:53.380526 kubelet[3036]: E0906 00:06:53.380483 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" containerName="mount-cgroup" Sep 6 00:06:53.381129 kubelet[3036]: E0906 00:06:53.381106 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" containerName="mount-bpf-fs" Sep 6 00:06:53.381277 kubelet[3036]: E0906 00:06:53.381255 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" containerName="clean-cilium-state" Sep 6 00:06:53.381388 kubelet[3036]: E0906 00:06:53.381367 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" containerName="apply-sysctl-overwrites" Sep 6 00:06:53.381555 kubelet[3036]: E0906 00:06:53.381534 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" containerName="cilium-agent" Sep 6 00:06:53.381663 kubelet[3036]: E0906 00:06:53.381642 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cab44746-4338-4010-9e59-dc8c9532c501" containerName="cilium-operator" Sep 6 00:06:53.381822 kubelet[3036]: I0906 00:06:53.381800 3036 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd06277-8ae6-43bd-b8e2-d1b7109fb58f" containerName="cilium-agent" Sep 6 00:06:53.381936 kubelet[3036]: I0906 00:06:53.381915 3036 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab44746-4338-4010-9e59-dc8c9532c501" containerName="cilium-operator" Sep 6 00:06:53.561156 sshd[5939]: Accepted publickey for core from 147.75.109.163 port 39446 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 
6 00:06:53.563709 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:53.569281 kubelet[3036]: I0906 00:06:53.568418 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-lib-modules\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.569281 kubelet[3036]: I0906 00:06:53.569192 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-net\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570149 kubelet[3036]: I0906 00:06:53.569509 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-kernel\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570149 kubelet[3036]: I0906 00:06:53.569565 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-xtables-lock\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570149 kubelet[3036]: I0906 00:06:53.569602 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-clustermesh-secrets\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 
00:06:53.570149 kubelet[3036]: I0906 00:06:53.569640 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-ipsec-secrets\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570149 kubelet[3036]: I0906 00:06:53.569682 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-cgroup\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570149 kubelet[3036]: I0906 00:06:53.569716 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cni-path\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570613 kubelet[3036]: I0906 00:06:53.569757 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-config-path\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570613 kubelet[3036]: I0906 00:06:53.569794 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-bpf-maps\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570613 kubelet[3036]: I0906 00:06:53.569837 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hostproc\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570613 kubelet[3036]: I0906 00:06:53.569873 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svw99\" (UniqueName: \"kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-kube-api-access-svw99\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570613 kubelet[3036]: I0906 00:06:53.569913 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hubble-tls\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570613 kubelet[3036]: I0906 00:06:53.569951 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-run\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.570997 kubelet[3036]: I0906 00:06:53.569989 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-etc-cni-netd\") pod \"cilium-9zj44\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " pod="kube-system/cilium-9zj44" Sep 6 00:06:53.572709 systemd-logind[1921]: New session 26 of user core. Sep 6 00:06:53.575033 systemd[1]: Started session-26.scope. 
Sep 6 00:06:53.904467 sshd[5939]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:53.912223 env[1935]: time="2025-09-06T00:06:53.911296877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zj44,Uid:c41daa19-22bb-4972-8cd6-2cbff2b5b141,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:53.911444 systemd[1]: sshd@25-172.31.29.77:22-147.75.109.163:39446.service: Deactivated successfully. Sep 6 00:06:53.912844 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:06:53.914321 systemd-logind[1921]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:06:53.937107 systemd[1]: Started sshd@26-172.31.29.77:22-147.75.109.163:39462.service. Sep 6 00:06:53.945676 systemd-logind[1921]: Removed session 26. Sep 6 00:06:54.000795 env[1935]: time="2025-09-06T00:06:54.000592011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:54.000795 env[1935]: time="2025-09-06T00:06:54.000663365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:54.000795 env[1935]: time="2025-09-06T00:06:54.000688806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:54.001499 env[1935]: time="2025-09-06T00:06:54.001398951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9 pid=5966 runtime=io.containerd.runc.v2 Sep 6 00:06:54.094347 env[1935]: time="2025-09-06T00:06:54.094220408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zj44,Uid:c41daa19-22bb-4972-8cd6-2cbff2b5b141,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\"" Sep 6 00:06:54.104313 env[1935]: time="2025-09-06T00:06:54.104211932Z" level=info msg="CreateContainer within sandbox \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:06:54.126915 sshd[5958]: Accepted publickey for core from 147.75.109.163 port 39462 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:54.127827 env[1935]: time="2025-09-06T00:06:54.127768700Z" level=info msg="CreateContainer within sandbox \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c\"" Sep 6 00:06:54.130102 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:54.133852 env[1935]: time="2025-09-06T00:06:54.132633670Z" level=info msg="StartContainer for \"a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c\"" Sep 6 00:06:54.142340 systemd-logind[1921]: New session 27 of user core. Sep 6 00:06:54.144458 systemd[1]: Started session-27.scope. 
Sep 6 00:06:54.304548 env[1935]: time="2025-09-06T00:06:54.304464781Z" level=info msg="StartContainer for \"a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c\" returns successfully" Sep 6 00:06:54.381157 env[1935]: time="2025-09-06T00:06:54.378830260Z" level=info msg="shim disconnected" id=a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c Sep 6 00:06:54.381157 env[1935]: time="2025-09-06T00:06:54.378902658Z" level=warning msg="cleaning up after shim disconnected" id=a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c namespace=k8s.io Sep 6 00:06:54.381157 env[1935]: time="2025-09-06T00:06:54.378924858Z" level=info msg="cleaning up dead shim" Sep 6 00:06:54.405992 env[1935]: time="2025-09-06T00:06:54.405898585Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6058 runtime=io.containerd.runc.v2\n" Sep 6 00:06:54.446071 kubelet[3036]: E0906 00:06:54.445979 3036 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-kzl77" podUID="3cf1f677-f2ec-44c1-a2e0-0637cad7beb6" Sep 6 00:06:55.226741 env[1935]: time="2025-09-06T00:06:55.226657618Z" level=info msg="StopPodSandbox for \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\"" Sep 6 00:06:55.227481 env[1935]: time="2025-09-06T00:06:55.227432434Z" level=info msg="Container to stop \"a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:06:55.231616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9-shm.mount: Deactivated successfully. 
Sep 6 00:06:55.316053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9-rootfs.mount: Deactivated successfully. Sep 6 00:06:55.328966 env[1935]: time="2025-09-06T00:06:55.328865691Z" level=info msg="shim disconnected" id=6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9 Sep 6 00:06:55.328966 env[1935]: time="2025-09-06T00:06:55.328960254Z" level=warning msg="cleaning up after shim disconnected" id=6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9 namespace=k8s.io Sep 6 00:06:55.329360 env[1935]: time="2025-09-06T00:06:55.328984723Z" level=info msg="cleaning up dead shim" Sep 6 00:06:55.358457 env[1935]: time="2025-09-06T00:06:55.358385332Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6090 runtime=io.containerd.runc.v2\n" Sep 6 00:06:55.359000 env[1935]: time="2025-09-06T00:06:55.358943913Z" level=info msg="TearDown network for sandbox \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" successfully" Sep 6 00:06:55.359128 env[1935]: time="2025-09-06T00:06:55.358995622Z" level=info msg="StopPodSandbox for \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" returns successfully" Sep 6 00:06:55.487265 kubelet[3036]: I0906 00:06:55.487121 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-xtables-lock\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.488022 kubelet[3036]: I0906 00:06:55.487952 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-bpf-maps\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: 
\"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.488114 kubelet[3036]: I0906 00:06:55.488047 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svw99\" (UniqueName: \"kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-kube-api-access-svw99\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.488195 kubelet[3036]: I0906 00:06:55.488111 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-kernel\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.488359 kubelet[3036]: I0906 00:06:55.487223 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.488485 kubelet[3036]: I0906 00:06:55.488320 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.488597 kubelet[3036]: I0906 00:06:55.488350 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-ipsec-secrets\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.488745 kubelet[3036]: I0906 00:06:55.488719 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cni-path\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.488887 kubelet[3036]: I0906 00:06:55.488862 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-run\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.489043 kubelet[3036]: I0906 00:06:55.489018 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-etc-cni-netd\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.489192 kubelet[3036]: I0906 00:06:55.489163 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-clustermesh-secrets\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.489393 kubelet[3036]: I0906 00:06:55.489368 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-cgroup\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.489541 kubelet[3036]: I0906 00:06:55.489516 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-config-path\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.489681 kubelet[3036]: I0906 00:06:55.489656 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hostproc\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.489851 kubelet[3036]: I0906 00:06:55.489826 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-net\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.489999 kubelet[3036]: I0906 00:06:55.489974 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hubble-tls\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.490136 kubelet[3036]: I0906 00:06:55.490112 3036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-lib-modules\") pod \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\" (UID: \"c41daa19-22bb-4972-8cd6-2cbff2b5b141\") " Sep 6 00:06:55.490322 kubelet[3036]: I0906 00:06:55.490298 3036 
reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-xtables-lock\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.490466 kubelet[3036]: I0906 00:06:55.490444 3036 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-bpf-maps\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.498683 kubelet[3036]: I0906 00:06:55.489289 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.498865 kubelet[3036]: I0906 00:06:55.490621 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.498865 kubelet[3036]: I0906 00:06:55.490655 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cni-path" (OuterVolumeSpecName: "cni-path") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.498865 kubelet[3036]: I0906 00:06:55.490681 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.498865 kubelet[3036]: I0906 00:06:55.490707 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.498865 kubelet[3036]: I0906 00:06:55.494852 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hostproc" (OuterVolumeSpecName: "hostproc") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.499179 kubelet[3036]: I0906 00:06:55.494918 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.499179 kubelet[3036]: I0906 00:06:55.498813 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:06:55.499422 kubelet[3036]: I0906 00:06:55.499120 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:06:55.503504 systemd[1]: var-lib-kubelet-pods-c41daa19\x2d22bb\x2d4972\x2d8cd6\x2d2cbff2b5b141-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsvw99.mount: Deactivated successfully. Sep 6 00:06:55.510491 systemd[1]: var-lib-kubelet-pods-c41daa19\x2d22bb\x2d4972\x2d8cd6\x2d2cbff2b5b141-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:06:55.516753 kubelet[3036]: I0906 00:06:55.516053 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-kube-api-access-svw99" (OuterVolumeSpecName: "kube-api-access-svw99") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "kube-api-access-svw99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:06:55.517035 kubelet[3036]: I0906 00:06:55.516979 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:06:55.517138 kubelet[3036]: I0906 00:06:55.517105 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:06:55.517443 kubelet[3036]: I0906 00:06:55.517408 3036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c41daa19-22bb-4972-8cd6-2cbff2b5b141" (UID: "c41daa19-22bb-4972-8cd6-2cbff2b5b141"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:06:55.591527 kubelet[3036]: I0906 00:06:55.591465 3036 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-clustermesh-secrets\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591527 kubelet[3036]: I0906 00:06:55.591519 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-cgroup\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591762 kubelet[3036]: I0906 00:06:55.591543 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-config-path\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591762 kubelet[3036]: I0906 00:06:55.591568 3036 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hostproc\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591762 kubelet[3036]: I0906 00:06:55.591590 3036 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-etc-cni-netd\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591762 kubelet[3036]: I0906 00:06:55.591612 3036 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-net\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591762 kubelet[3036]: I0906 00:06:55.591635 3036 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-hubble-tls\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 
00:06:55.591762 kubelet[3036]: I0906 00:06:55.591659 3036 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-lib-modules\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591762 kubelet[3036]: I0906 00:06:55.591680 3036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svw99\" (UniqueName: \"kubernetes.io/projected/c41daa19-22bb-4972-8cd6-2cbff2b5b141-kube-api-access-svw99\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.591762 kubelet[3036]: I0906 00:06:55.591701 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-ipsec-secrets\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.592221 kubelet[3036]: I0906 00:06:55.591723 3036 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cni-path\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.592221 kubelet[3036]: I0906 00:06:55.591818 3036 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-host-proc-sys-kernel\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.592221 kubelet[3036]: I0906 00:06:55.591845 3036 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c41daa19-22bb-4972-8cd6-2cbff2b5b141-cilium-run\") on node \"ip-172-31-29-77\" DevicePath \"\"" Sep 6 00:06:55.699817 systemd[1]: var-lib-kubelet-pods-c41daa19\x2d22bb\x2d4972\x2d8cd6\x2d2cbff2b5b141-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:06:55.700096 systemd[1]: var-lib-kubelet-pods-c41daa19\x2d22bb\x2d4972\x2d8cd6\x2d2cbff2b5b141-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:06:56.230579 kubelet[3036]: I0906 00:06:56.230535 3036 scope.go:117] "RemoveContainer" containerID="a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c" Sep 6 00:06:56.237644 env[1935]: time="2025-09-06T00:06:56.237449283Z" level=info msg="RemoveContainer for \"a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c\"" Sep 6 00:06:56.246350 env[1935]: time="2025-09-06T00:06:56.246267389Z" level=info msg="RemoveContainer for \"a35e7778a3610c1796ceaf27655fa42a2f5f34ddc660a93d8e5d0d332ff6ea4c\" returns successfully" Sep 6 00:06:56.296910 kubelet[3036]: E0906 00:06:56.296852 3036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c41daa19-22bb-4972-8cd6-2cbff2b5b141" containerName="mount-cgroup" Sep 6 00:06:56.297146 kubelet[3036]: I0906 00:06:56.296953 3036 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41daa19-22bb-4972-8cd6-2cbff2b5b141" containerName="mount-cgroup" Sep 6 00:06:56.397154 kubelet[3036]: I0906 00:06:56.397105 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-cilium-run\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397154 kubelet[3036]: I0906 00:06:56.397167 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-hostproc\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397454 kubelet[3036]: I0906 00:06:56.397208 3036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-cilium-cgroup\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397454 kubelet[3036]: I0906 00:06:56.397281 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs79s\" (UniqueName: \"kubernetes.io/projected/50882c30-2ec7-427e-9213-1dbdfe6f2f30-kube-api-access-gs79s\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397454 kubelet[3036]: I0906 00:06:56.397328 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-lib-modules\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397454 kubelet[3036]: I0906 00:06:56.397369 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/50882c30-2ec7-427e-9213-1dbdfe6f2f30-cilium-ipsec-secrets\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397454 kubelet[3036]: I0906 00:06:56.397405 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50882c30-2ec7-427e-9213-1dbdfe6f2f30-hubble-tls\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397454 kubelet[3036]: I0906 00:06:56.397443 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-bpf-maps\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397811 kubelet[3036]: I0906 00:06:56.397482 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-xtables-lock\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397811 kubelet[3036]: I0906 00:06:56.397516 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50882c30-2ec7-427e-9213-1dbdfe6f2f30-clustermesh-secrets\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397811 kubelet[3036]: I0906 00:06:56.397558 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50882c30-2ec7-427e-9213-1dbdfe6f2f30-cilium-config-path\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397811 kubelet[3036]: I0906 00:06:56.397601 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-host-proc-sys-kernel\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397811 kubelet[3036]: I0906 00:06:56.397639 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-cni-path\") pod \"cilium-42drv\" (UID: 
\"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.397811 kubelet[3036]: I0906 00:06:56.397678 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-etc-cni-netd\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.398158 kubelet[3036]: I0906 00:06:56.397718 3036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50882c30-2ec7-427e-9213-1dbdfe6f2f30-host-proc-sys-net\") pod \"cilium-42drv\" (UID: \"50882c30-2ec7-427e-9213-1dbdfe6f2f30\") " pod="kube-system/cilium-42drv" Sep 6 00:06:56.444615 kubelet[3036]: E0906 00:06:56.444525 3036 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-kzl77" podUID="3cf1f677-f2ec-44c1-a2e0-0637cad7beb6" Sep 6 00:06:56.448865 kubelet[3036]: I0906 00:06:56.448797 3036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41daa19-22bb-4972-8cd6-2cbff2b5b141" path="/var/lib/kubelet/pods/c41daa19-22bb-4972-8cd6-2cbff2b5b141/volumes" Sep 6 00:06:56.611606 env[1935]: time="2025-09-06T00:06:56.610198932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-42drv,Uid:50882c30-2ec7-427e-9213-1dbdfe6f2f30,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:56.636665 env[1935]: time="2025-09-06T00:06:56.636228545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:56.636665 env[1935]: time="2025-09-06T00:06:56.636381982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:56.636665 env[1935]: time="2025-09-06T00:06:56.636420035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:56.637299 env[1935]: time="2025-09-06T00:06:56.637136301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6 pid=6118 runtime=io.containerd.runc.v2 Sep 6 00:06:56.715471 kubelet[3036]: E0906 00:06:56.715411 3036 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:56.733494 env[1935]: time="2025-09-06T00:06:56.733440357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-42drv,Uid:50882c30-2ec7-427e-9213-1dbdfe6f2f30,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\"" Sep 6 00:06:56.740277 env[1935]: time="2025-09-06T00:06:56.740204841Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:06:56.764486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601380974.mount: Deactivated successfully. Sep 6 00:06:56.777117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102873074.mount: Deactivated successfully. 
Sep 6 00:06:56.789142 env[1935]: time="2025-09-06T00:06:56.789079891Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ef4f75336e904325ff4543107aa504bf4abcfcace17bcf442110c9e6ce9a9bb\"" Sep 6 00:06:56.792357 env[1935]: time="2025-09-06T00:06:56.792195605Z" level=info msg="StartContainer for \"2ef4f75336e904325ff4543107aa504bf4abcfcace17bcf442110c9e6ce9a9bb\"" Sep 6 00:06:56.893726 env[1935]: time="2025-09-06T00:06:56.893035330Z" level=info msg="StartContainer for \"2ef4f75336e904325ff4543107aa504bf4abcfcace17bcf442110c9e6ce9a9bb\" returns successfully" Sep 6 00:06:56.975670 env[1935]: time="2025-09-06T00:06:56.975604432Z" level=info msg="shim disconnected" id=2ef4f75336e904325ff4543107aa504bf4abcfcace17bcf442110c9e6ce9a9bb Sep 6 00:06:56.976028 env[1935]: time="2025-09-06T00:06:56.975993316Z" level=warning msg="cleaning up after shim disconnected" id=2ef4f75336e904325ff4543107aa504bf4abcfcace17bcf442110c9e6ce9a9bb namespace=k8s.io Sep 6 00:06:56.976175 env[1935]: time="2025-09-06T00:06:56.976147317Z" level=info msg="cleaning up dead shim" Sep 6 00:06:57.001430 env[1935]: time="2025-09-06T00:06:57.001372605Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6205 runtime=io.containerd.runc.v2\n" Sep 6 00:06:57.248018 env[1935]: time="2025-09-06T00:06:57.247959727Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:06:57.286159 env[1935]: time="2025-09-06T00:06:57.286096603Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"78755227792fcf5fe8e9571ec067486bd6e3e09be1dfed518619d6471cb6b848\"" Sep 6 00:06:57.288873 env[1935]: time="2025-09-06T00:06:57.287419871Z" level=info msg="StartContainer for \"78755227792fcf5fe8e9571ec067486bd6e3e09be1dfed518619d6471cb6b848\"" Sep 6 00:06:57.393869 env[1935]: time="2025-09-06T00:06:57.393806133Z" level=info msg="StartContainer for \"78755227792fcf5fe8e9571ec067486bd6e3e09be1dfed518619d6471cb6b848\" returns successfully" Sep 6 00:06:57.454456 env[1935]: time="2025-09-06T00:06:57.454395064Z" level=info msg="shim disconnected" id=78755227792fcf5fe8e9571ec067486bd6e3e09be1dfed518619d6471cb6b848 Sep 6 00:06:57.454943 env[1935]: time="2025-09-06T00:06:57.454896823Z" level=warning msg="cleaning up after shim disconnected" id=78755227792fcf5fe8e9571ec067486bd6e3e09be1dfed518619d6471cb6b848 namespace=k8s.io Sep 6 00:06:57.455082 env[1935]: time="2025-09-06T00:06:57.455054544Z" level=info msg="cleaning up dead shim" Sep 6 00:06:57.468535 env[1935]: time="2025-09-06T00:06:57.468480741Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6265 runtime=io.containerd.runc.v2\n" Sep 6 00:06:58.255967 env[1935]: time="2025-09-06T00:06:58.254327076Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:06:58.283744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125570206.mount: Deactivated successfully. 
Sep 6 00:06:58.298805 env[1935]: time="2025-09-06T00:06:58.298626548Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd6a7a07d620b7c6b9f925d9700fbdd0b410648a0844eede127220d100cfdd1e\"" Sep 6 00:06:58.301247 env[1935]: time="2025-09-06T00:06:58.299688172Z" level=info msg="StartContainer for \"fd6a7a07d620b7c6b9f925d9700fbdd0b410648a0844eede127220d100cfdd1e\"" Sep 6 00:06:58.437742 env[1935]: time="2025-09-06T00:06:58.437656228Z" level=info msg="StartContainer for \"fd6a7a07d620b7c6b9f925d9700fbdd0b410648a0844eede127220d100cfdd1e\" returns successfully" Sep 6 00:06:58.444821 kubelet[3036]: E0906 00:06:58.444293 3036 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-kzl77" podUID="3cf1f677-f2ec-44c1-a2e0-0637cad7beb6" Sep 6 00:06:58.488481 env[1935]: time="2025-09-06T00:06:58.488406292Z" level=info msg="shim disconnected" id=fd6a7a07d620b7c6b9f925d9700fbdd0b410648a0844eede127220d100cfdd1e Sep 6 00:06:58.488481 env[1935]: time="2025-09-06T00:06:58.488477694Z" level=warning msg="cleaning up after shim disconnected" id=fd6a7a07d620b7c6b9f925d9700fbdd0b410648a0844eede127220d100cfdd1e namespace=k8s.io Sep 6 00:06:58.488909 env[1935]: time="2025-09-06T00:06:58.488500122Z" level=info msg="cleaning up dead shim" Sep 6 00:06:58.517012 env[1935]: time="2025-09-06T00:06:58.516270763Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6324 runtime=io.containerd.runc.v2\n" Sep 6 00:06:58.700096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd6a7a07d620b7c6b9f925d9700fbdd0b410648a0844eede127220d100cfdd1e-rootfs.mount: Deactivated 
successfully. Sep 6 00:06:59.255864 env[1935]: time="2025-09-06T00:06:59.255798976Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:06:59.290453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424400472.mount: Deactivated successfully. Sep 6 00:06:59.298950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876662556.mount: Deactivated successfully. Sep 6 00:06:59.305145 env[1935]: time="2025-09-06T00:06:59.305056473Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"782bf11dee34c0c87619bf67a5f6211b849a915cac792eef8f0cc20e3f6232e6\"" Sep 6 00:06:59.306293 env[1935]: time="2025-09-06T00:06:59.306188719Z" level=info msg="StartContainer for \"782bf11dee34c0c87619bf67a5f6211b849a915cac792eef8f0cc20e3f6232e6\"" Sep 6 00:06:59.432402 env[1935]: time="2025-09-06T00:06:59.432311042Z" level=info msg="StartContainer for \"782bf11dee34c0c87619bf67a5f6211b849a915cac792eef8f0cc20e3f6232e6\" returns successfully" Sep 6 00:06:59.473336 env[1935]: time="2025-09-06T00:06:59.473202582Z" level=info msg="shim disconnected" id=782bf11dee34c0c87619bf67a5f6211b849a915cac792eef8f0cc20e3f6232e6 Sep 6 00:06:59.473720 env[1935]: time="2025-09-06T00:06:59.473673248Z" level=warning msg="cleaning up after shim disconnected" id=782bf11dee34c0c87619bf67a5f6211b849a915cac792eef8f0cc20e3f6232e6 namespace=k8s.io Sep 6 00:06:59.473853 env[1935]: time="2025-09-06T00:06:59.473825581Z" level=info msg="cleaning up dead shim" Sep 6 00:06:59.487727 env[1935]: time="2025-09-06T00:06:59.487671372Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6382 runtime=io.containerd.runc.v2\n" Sep 6 00:06:59.700405 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-782bf11dee34c0c87619bf67a5f6211b849a915cac792eef8f0cc20e3f6232e6-rootfs.mount: Deactivated successfully. Sep 6 00:07:00.263449 env[1935]: time="2025-09-06T00:07:00.263188809Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:07:00.303156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777340618.mount: Deactivated successfully. Sep 6 00:07:00.312827 env[1935]: time="2025-09-06T00:07:00.312744986Z" level=info msg="CreateContainer within sandbox \"5b27eddbb96e59b2cb19a5e5d9b451c746129f517ebf6ba9966205ab30a3ebf6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a7b926851d9871271cd238d7b286f5142cae9b1b775ecadd7e6fc2206e8e0e6e\"" Sep 6 00:07:00.316317 env[1935]: time="2025-09-06T00:07:00.315827747Z" level=info msg="StartContainer for \"a7b926851d9871271cd238d7b286f5142cae9b1b775ecadd7e6fc2206e8e0e6e\"" Sep 6 00:07:00.451301 kubelet[3036]: E0906 00:07:00.444113 3036 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-kzl77" podUID="3cf1f677-f2ec-44c1-a2e0-0637cad7beb6" Sep 6 00:07:00.458745 env[1935]: time="2025-09-06T00:07:00.458660128Z" level=info msg="StartContainer for \"a7b926851d9871271cd238d7b286f5142cae9b1b775ecadd7e6fc2206e8e0e6e\" returns successfully" Sep 6 00:07:00.540885 kubelet[3036]: I0906 00:07:00.539397 3036 setters.go:600] "Node became not ready" node="ip-172-31-29-77" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:07:00Z","lastTransitionTime":"2025-09-06T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:07:01.234334 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 6 00:07:02.968119 systemd[1]: run-containerd-runc-k8s.io-a7b926851d9871271cd238d7b286f5142cae9b1b775ecadd7e6fc2206e8e0e6e-runc.2UWTm0.mount: Deactivated successfully. Sep 6 00:07:05.637636 (udev-worker)[6949]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:07:05.651328 (udev-worker)[6950]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:07:05.691897 systemd-networkd[1599]: lxc_health: Link UP Sep 6 00:07:05.720263 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:07:05.723459 systemd-networkd[1599]: lxc_health: Gained carrier Sep 6 00:07:06.377159 env[1935]: time="2025-09-06T00:07:06.377089210Z" level=info msg="StopPodSandbox for \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\"" Sep 6 00:07:06.377832 env[1935]: time="2025-09-06T00:07:06.377263909Z" level=info msg="TearDown network for sandbox \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" successfully" Sep 6 00:07:06.377832 env[1935]: time="2025-09-06T00:07:06.377323994Z" level=info msg="StopPodSandbox for \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" returns successfully" Sep 6 00:07:06.378803 env[1935]: time="2025-09-06T00:07:06.378736464Z" level=info msg="RemovePodSandbox for \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\"" Sep 6 00:07:06.378971 env[1935]: time="2025-09-06T00:07:06.378829946Z" level=info msg="Forcibly stopping sandbox \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\"" Sep 6 00:07:06.379042 env[1935]: time="2025-09-06T00:07:06.379015433Z" level=info msg="TearDown network for sandbox \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" successfully" Sep 6 00:07:06.387516 env[1935]: time="2025-09-06T00:07:06.387401407Z" level=info 
msg="RemovePodSandbox \"7224048f772b73937d870ed60fca9e970bcf032e207933befb6d4170634519a9\" returns successfully" Sep 6 00:07:06.388358 env[1935]: time="2025-09-06T00:07:06.388297221Z" level=info msg="StopPodSandbox for \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\"" Sep 6 00:07:06.388550 env[1935]: time="2025-09-06T00:07:06.388472268Z" level=info msg="TearDown network for sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" successfully" Sep 6 00:07:06.388634 env[1935]: time="2025-09-06T00:07:06.388543393Z" level=info msg="StopPodSandbox for \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" returns successfully" Sep 6 00:07:06.391266 env[1935]: time="2025-09-06T00:07:06.389136958Z" level=info msg="RemovePodSandbox for \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\"" Sep 6 00:07:06.391266 env[1935]: time="2025-09-06T00:07:06.389191187Z" level=info msg="Forcibly stopping sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\"" Sep 6 00:07:06.391266 env[1935]: time="2025-09-06T00:07:06.389351618Z" level=info msg="TearDown network for sandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" successfully" Sep 6 00:07:06.396119 env[1935]: time="2025-09-06T00:07:06.396055813Z" level=info msg="RemovePodSandbox \"5b76ca8ff3c890ac1b5de02a9f6f8be3ba430bd8ba1f47f5c2f1344a5dd213a6\" returns successfully" Sep 6 00:07:06.397212 env[1935]: time="2025-09-06T00:07:06.397147831Z" level=info msg="StopPodSandbox for \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\"" Sep 6 00:07:06.397393 env[1935]: time="2025-09-06T00:07:06.397318581Z" level=info msg="TearDown network for sandbox \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" successfully" Sep 6 00:07:06.397393 env[1935]: time="2025-09-06T00:07:06.397379722Z" level=info msg="StopPodSandbox for \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" returns 
successfully" Sep 6 00:07:06.398141 env[1935]: time="2025-09-06T00:07:06.398061081Z" level=info msg="RemovePodSandbox for \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\"" Sep 6 00:07:06.398292 env[1935]: time="2025-09-06T00:07:06.398144290Z" level=info msg="Forcibly stopping sandbox \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\"" Sep 6 00:07:06.398382 env[1935]: time="2025-09-06T00:07:06.398316529Z" level=info msg="TearDown network for sandbox \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" successfully" Sep 6 00:07:06.405697 env[1935]: time="2025-09-06T00:07:06.405613570Z" level=info msg="RemovePodSandbox \"6a9a5b9eca6520d877daf8ed0a0d2abb8981aaad3f8cd3962432e76f420859d9\" returns successfully" Sep 6 00:07:06.660642 kubelet[3036]: I0906 00:07:06.660443 3036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-42drv" podStartSLOduration=10.66039845 podStartE2EDuration="10.66039845s" podCreationTimestamp="2025-09-06 00:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:07:01.301115348 +0000 UTC m=+175.186625710" watchObservedRunningTime="2025-09-06 00:07:06.66039845 +0000 UTC m=+180.545908740" Sep 6 00:07:07.508463 systemd-networkd[1599]: lxc_health: Gained IPv6LL Sep 6 00:07:07.557478 systemd[1]: run-containerd-runc-k8s.io-a7b926851d9871271cd238d7b286f5142cae9b1b775ecadd7e6fc2206e8e0e6e-runc.hzhYgn.mount: Deactivated successfully. Sep 6 00:07:12.416582 sshd[5958]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:12.422812 systemd[1]: sshd@26-172.31.29.77:22-147.75.109.163:39462.service: Deactivated successfully. Sep 6 00:07:12.425524 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 00:07:12.428061 systemd-logind[1921]: Session 27 logged out. Waiting for processes to exit. 
Sep 6 00:07:12.429988 systemd-logind[1921]: Removed session 27.
Sep 6 00:07:24.411313 update_engine[1923]: I0906 00:07:24.410837 1923 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 6 00:07:24.411313 update_engine[1923]: I0906 00:07:24.410892 1923 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 6 00:07:24.412001 update_engine[1923]: I0906 00:07:24.411432 1923 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 6 00:07:24.412365 update_engine[1923]: I0906 00:07:24.412318 1923 omaha_request_params.cc:62] Current group set to lts
Sep 6 00:07:24.412595 update_engine[1923]: I0906 00:07:24.412555 1923 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 6 00:07:24.412595 update_engine[1923]: I0906 00:07:24.412580 1923 update_attempter.cc:643] Scheduling an action processor start.
Sep 6 00:07:24.412719 update_engine[1923]: I0906 00:07:24.412611 1923 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 6 00:07:24.412719 update_engine[1923]: I0906 00:07:24.412660 1923 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 6 00:07:24.414091 update_engine[1923]: I0906 00:07:24.413861 1923 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 6 00:07:24.414091 update_engine[1923]: I0906 00:07:24.414078 1923 omaha_request_action.cc:271] Request:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]:
Sep 6 00:07:24.414091 update_engine[1923]: I0906 00:07:24.414093 1923 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:07:24.414966 locksmithd[1993]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 6 00:07:24.420480 update_engine[1923]: I0906 00:07:24.420421 1923 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:07:24.420835 update_engine[1923]: I0906 00:07:24.420780 1923 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:07:24.430941 update_engine[1923]: E0906 00:07:24.430878 1923 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:07:24.431124 update_engine[1923]: I0906 00:07:24.431041 1923 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 6 00:07:34.413908 update_engine[1923]: I0906 00:07:34.413838 1923 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:07:34.414839 update_engine[1923]: I0906 00:07:34.414150 1923 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:07:34.414839 update_engine[1923]: I0906 00:07:34.414415 1923 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:07:34.414987 update_engine[1923]: E0906 00:07:34.414852 1923 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:07:34.414987 update_engine[1923]: I0906 00:07:34.414974 1923 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 6 00:07:44.408861 update_engine[1923]: I0906 00:07:44.408794 1923 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:07:44.409527 update_engine[1923]: I0906 00:07:44.409107 1923 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:07:44.409527 update_engine[1923]: I0906 00:07:44.409396 1923 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:07:44.409896 update_engine[1923]: E0906 00:07:44.409852 1923 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:07:44.410011 update_engine[1923]: I0906 00:07:44.409984 1923 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 6 00:07:54.413505 update_engine[1923]: I0906 00:07:54.413362 1923 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:07:54.414331 update_engine[1923]: I0906 00:07:54.413693 1923 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:07:54.414331 update_engine[1923]: I0906 00:07:54.413964 1923 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:07:54.414537 update_engine[1923]: E0906 00:07:54.414453 1923 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:07:54.414616 update_engine[1923]: I0906 00:07:54.414573 1923 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 6 00:07:54.414616 update_engine[1923]: I0906 00:07:54.414587 1923 omaha_request_action.cc:621] Omaha request response:
Sep 6 00:07:54.414733 update_engine[1923]: E0906 00:07:54.414690 1923 omaha_request_action.cc:640] Omaha request network transfer failed.
Sep 6 00:07:54.414733 update_engine[1923]: I0906 00:07:54.414716 1923 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 6 00:07:54.414733 update_engine[1923]: I0906 00:07:54.414726 1923 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 6 00:07:54.414931 update_engine[1923]: I0906 00:07:54.414734 1923 update_attempter.cc:306] Processing Done.
Sep 6 00:07:54.414931 update_engine[1923]: E0906 00:07:54.414753 1923 update_attempter.cc:619] Update failed.
Sep 6 00:07:54.414931 update_engine[1923]: I0906 00:07:54.414764 1923 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 6 00:07:54.414931 update_engine[1923]: I0906 00:07:54.414772 1923 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 6 00:07:54.414931 update_engine[1923]: I0906 00:07:54.414783 1923 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 6 00:07:54.414931 update_engine[1923]: I0906 00:07:54.414888 1923 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 6 00:07:54.414931 update_engine[1923]: I0906 00:07:54.414922 1923 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 6 00:07:54.414931 update_engine[1923]: I0906 00:07:54.414932 1923 omaha_request_action.cc:271] Request:
Sep 6 00:07:54.414931 update_engine[1923]:
Sep 6 00:07:54.414931 update_engine[1923]:
Sep 6 00:07:54.414931 update_engine[1923]:
Sep 6 00:07:54.414931 update_engine[1923]:
Sep 6 00:07:54.414931 update_engine[1923]:
Sep 6 00:07:54.414931 update_engine[1923]:
Sep 6 00:07:54.415903 update_engine[1923]: I0906 00:07:54.414943 1923 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:07:54.415903 update_engine[1923]: I0906 00:07:54.415172 1923 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:07:54.415903 update_engine[1923]: I0906 00:07:54.415421 1923 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:07:54.415903 update_engine[1923]: E0906 00:07:54.415892 1923 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:07:54.416128 update_engine[1923]: I0906 00:07:54.416005 1923 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 6 00:07:54.416128 update_engine[1923]: I0906 00:07:54.416018 1923 omaha_request_action.cc:621] Omaha request response:
Sep 6 00:07:54.416128 update_engine[1923]: I0906 00:07:54.416029 1923 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 6 00:07:54.416128 update_engine[1923]: I0906 00:07:54.416038 1923 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 6 00:07:54.416128 update_engine[1923]: I0906 00:07:54.416046 1923 update_attempter.cc:306] Processing Done.
Sep 6 00:07:54.416128 update_engine[1923]: I0906 00:07:54.416057 1923 update_attempter.cc:310] Error event sent.
Sep 6 00:07:54.416128 update_engine[1923]: I0906 00:07:54.416071 1923 update_check_scheduler.cc:74] Next update check in 41m57s
Sep 6 00:07:54.416689 locksmithd[1993]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 6 00:07:54.416689 locksmithd[1993]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 6 00:07:58.922759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aface63cf9cedcd2b9180d7edb92ad5f20cf7fdd02389c289f02539564ad1e1-rootfs.mount: Deactivated successfully.
Sep 6 00:07:58.937018 env[1935]: time="2025-09-06T00:07:58.936943908Z" level=info msg="shim disconnected" id=6aface63cf9cedcd2b9180d7edb92ad5f20cf7fdd02389c289f02539564ad1e1
Sep 6 00:07:58.937746 env[1935]: time="2025-09-06T00:07:58.937016293Z" level=warning msg="cleaning up after shim disconnected" id=6aface63cf9cedcd2b9180d7edb92ad5f20cf7fdd02389c289f02539564ad1e1 namespace=k8s.io
Sep 6 00:07:58.937746 env[1935]: time="2025-09-06T00:07:58.937044902Z" level=info msg="cleaning up dead shim"
Sep 6 00:07:58.950434 env[1935]: time="2025-09-06T00:07:58.950361500Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7072 runtime=io.containerd.runc.v2\n"
Sep 6 00:07:59.427613 kubelet[3036]: I0906 00:07:59.427575 3036 scope.go:117] "RemoveContainer" containerID="6aface63cf9cedcd2b9180d7edb92ad5f20cf7fdd02389c289f02539564ad1e1"
Sep 6 00:07:59.432922 env[1935]: time="2025-09-06T00:07:59.432860827Z" level=info msg="CreateContainer within sandbox \"cb76bcfa78d094fbbf90587d926e7a828a044518605ad21f843dbfe3435ab94d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 6 00:07:59.454389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510965073.mount: Deactivated successfully.
Sep 6 00:07:59.469474 env[1935]: time="2025-09-06T00:07:59.469412862Z" level=info msg="CreateContainer within sandbox \"cb76bcfa78d094fbbf90587d926e7a828a044518605ad21f843dbfe3435ab94d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e8032e483c9160cdd4b740c939aaadbe25be8e4ed1793896afccc3bbb13122dc\""
Sep 6 00:07:59.470560 env[1935]: time="2025-09-06T00:07:59.470493080Z" level=info msg="StartContainer for \"e8032e483c9160cdd4b740c939aaadbe25be8e4ed1793896afccc3bbb13122dc\""
Sep 6 00:07:59.595413 env[1935]: time="2025-09-06T00:07:59.595347573Z" level=info msg="StartContainer for \"e8032e483c9160cdd4b740c939aaadbe25be8e4ed1793896afccc3bbb13122dc\" returns successfully"
Sep 6 00:08:01.200655 kubelet[3036]: E0906 00:08:01.200582 3036 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-77?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:08:03.531489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d4a752771f8e6620bcb58636e23a36526972ae7eac2c4537de19d08a3e25cb2-rootfs.mount: Deactivated successfully.
Sep 6 00:08:03.548329 env[1935]: time="2025-09-06T00:08:03.548193234Z" level=info msg="shim disconnected" id=0d4a752771f8e6620bcb58636e23a36526972ae7eac2c4537de19d08a3e25cb2
Sep 6 00:08:03.549029 env[1935]: time="2025-09-06T00:08:03.548330074Z" level=warning msg="cleaning up after shim disconnected" id=0d4a752771f8e6620bcb58636e23a36526972ae7eac2c4537de19d08a3e25cb2 namespace=k8s.io
Sep 6 00:08:03.549029 env[1935]: time="2025-09-06T00:08:03.548352994Z" level=info msg="cleaning up dead shim"
Sep 6 00:08:03.561727 env[1935]: time="2025-09-06T00:08:03.561664378Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7134 runtime=io.containerd.runc.v2\n"
Sep 6 00:08:04.447981 kubelet[3036]: I0906 00:08:04.447924 3036 scope.go:117] "RemoveContainer" containerID="0d4a752771f8e6620bcb58636e23a36526972ae7eac2c4537de19d08a3e25cb2"
Sep 6 00:08:04.450940 env[1935]: time="2025-09-06T00:08:04.450865723Z" level=info msg="CreateContainer within sandbox \"5aecf4bd091638628084e29627cbade9d39311dcd8522100154e801619135f97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 6 00:08:04.473746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969775387.mount: Deactivated successfully.
Sep 6 00:08:04.485104 env[1935]: time="2025-09-06T00:08:04.485018629Z" level=info msg="CreateContainer within sandbox \"5aecf4bd091638628084e29627cbade9d39311dcd8522100154e801619135f97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"249e2eca2199a86f9865669c56c37fa1208ffa816ec73b77c3d000f8a9cd9d4b\""
Sep 6 00:08:04.486182 env[1935]: time="2025-09-06T00:08:04.486138592Z" level=info msg="StartContainer for \"249e2eca2199a86f9865669c56c37fa1208ffa816ec73b77c3d000f8a9cd9d4b\""
Sep 6 00:08:04.619282 env[1935]: time="2025-09-06T00:08:04.619142398Z" level=info msg="StartContainer for \"249e2eca2199a86f9865669c56c37fa1208ffa816ec73b77c3d000f8a9cd9d4b\" returns successfully"
Sep 6 00:08:11.201828 kubelet[3036]: E0906 00:08:11.201006 3036 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-77?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:08:19.949028 amazon-ssm-agent[1904]: 2025-09-06 00:08:19 WARN [MessageGatewayService] Reach the retry limit 5 for receive messages.
Error: websocket: close 1006 (abnormal closure): unexpected EOF
Sep 6 00:08:20.049438 amazon-ssm-agent[1904]: 2025-09-06 00:08:19 INFO [MessageGatewayService] Closing websocket channel connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01ee6a987e2580dc5?role=subscribe&stream=input
Sep 6 00:08:20.149452 amazon-ssm-agent[1904]: 2025-09-06 00:08:19 INFO [MessageGatewayService] Successfully closed websocket connection to: 52.94.177.19:443
Sep 6 00:08:20.249637 amazon-ssm-agent[1904]: 2025-09-06 00:08:19 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01ee6a987e2580dc5?role=subscribe&stream=input
Sep 6 00:08:20.350009 amazon-ssm-agent[1904]: 2025-09-06 00:08:20 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01ee6a987e2580dc5?role=subscribe&stream=input