Sep 6 00:03:01.005290 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 6 00:03:01.005326 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025
Sep 6 00:03:01.005349 kernel: efi: EFI v2.70 by EDK II
Sep 6 00:03:01.005364 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Sep 6 00:03:01.005378 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:03:01.005411 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 6 00:03:01.005431 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 6 00:03:01.005445 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 6 00:03:01.005460 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 6 00:03:01.005474 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 6 00:03:01.005493 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 6 00:03:01.005507 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 6 00:03:01.005521 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 6 00:03:01.005536 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 6 00:03:01.005553 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 6 00:03:01.005572 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 6 00:03:01.005587 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 6 00:03:01.005601 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 6 00:03:01.005616 kernel: printk: bootconsole [uart0] enabled
Sep 6 00:03:01.005630 kernel: NUMA: Failed to initialise from firmware
Sep 6 00:03:01.005646 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 6 00:03:01.005661 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Sep 6 00:03:01.005676 kernel: Zone ranges:
Sep 6 00:03:01.005691 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 6 00:03:01.005705 kernel: DMA32 empty
Sep 6 00:03:01.005720 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 6 00:03:01.005738 kernel: Movable zone start for each node
Sep 6 00:03:01.005753 kernel: Early memory node ranges
Sep 6 00:03:01.005768 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 6 00:03:01.005782 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 6 00:03:01.005797 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 6 00:03:01.005812 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 6 00:03:01.005826 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 6 00:03:01.005841 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 6 00:03:01.005856 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 6 00:03:01.005870 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 6 00:03:01.005885 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 6 00:03:01.005900 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 6 00:03:01.005918 kernel: psci: probing for conduit method from ACPI.
Sep 6 00:03:01.005933 kernel: psci: PSCIv1.0 detected in firmware.
Sep 6 00:03:01.005954 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 00:03:01.005970 kernel: psci: Trusted OS migration not required
Sep 6 00:03:01.005985 kernel: psci: SMC Calling Convention v1.1
Sep 6 00:03:01.006004 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 6 00:03:01.006020 kernel: ACPI: SRAT not present
Sep 6 00:03:01.006036 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 6 00:03:01.006052 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 6 00:03:01.006067 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 6 00:03:01.006083 kernel: Detected PIPT I-cache on CPU0
Sep 6 00:03:01.006098 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 00:03:01.006114 kernel: CPU features: detected: Spectre-v2
Sep 6 00:03:01.006129 kernel: CPU features: detected: Spectre-v3a
Sep 6 00:03:01.006144 kernel: CPU features: detected: Spectre-BHB
Sep 6 00:03:01.006160 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 00:03:01.006179 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 00:03:01.006195 kernel: CPU features: detected: ARM erratum 1742098
Sep 6 00:03:01.006210 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 6 00:03:01.006225 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 6 00:03:01.006240 kernel: Policy zone: Normal
Sep 6 00:03:01.006258 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 00:03:01.006275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:03:01.006291 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:03:01.006306 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:03:01.006322 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:03:01.006341 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 6 00:03:01.006359 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Sep 6 00:03:01.006375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:03:01.006417 kernel: trace event string verifier disabled
Sep 6 00:03:01.006434 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 00:03:01.006450 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:03:01.006466 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:03:01.006482 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 00:03:01.006498 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:03:01.006514 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:03:01.006530 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:03:01.006545 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 00:03:01.006565 kernel: GICv3: 96 SPIs implemented
Sep 6 00:03:01.006581 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 00:03:01.006596 kernel: GICv3: Distributor has no Range Selector support
Sep 6 00:03:01.006611 kernel: Root IRQ handler: gic_handle_irq
Sep 6 00:03:01.006627 kernel: GICv3: 16 PPIs implemented
Sep 6 00:03:01.006642 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 6 00:03:01.006657 kernel: ACPI: SRAT not present
Sep 6 00:03:01.006672 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 6 00:03:01.006688 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 00:03:01.006737 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Sep 6 00:03:01.006759 kernel: GICv3: using LPI property table @0x00000004000b0000
Sep 6 00:03:01.006780 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 6 00:03:01.006796 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Sep 6 00:03:01.006812 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 6 00:03:01.006827 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 6 00:03:01.006843 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 6 00:03:01.006859 kernel: Console: colour dummy device 80x25
Sep 6 00:03:01.006875 kernel: printk: console [tty1] enabled
Sep 6 00:03:01.006891 kernel: ACPI: Core revision 20210730
Sep 6 00:03:01.006908 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 6 00:03:01.006924 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:03:01.006944 kernel: LSM: Security Framework initializing
Sep 6 00:03:01.006960 kernel: SELinux: Initializing.
Sep 6 00:03:01.006976 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.006992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.007008 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:03:01.007024 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 6 00:03:01.007039 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 6 00:03:01.007055 kernel: Remapping and enabling EFI services.
Sep 6 00:03:01.007071 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:03:01.007087 kernel: Detected PIPT I-cache on CPU1
Sep 6 00:03:01.007106 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 6 00:03:01.007122 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Sep 6 00:03:01.007138 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 6 00:03:01.007154 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:03:01.007170 kernel: SMP: Total of 2 processors activated.
Sep 6 00:03:01.007186 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 00:03:01.007201 kernel: CPU features: detected: 32-bit EL1 Support
Sep 6 00:03:01.007217 kernel: CPU features: detected: CRC32 instructions
Sep 6 00:03:01.007233 kernel: CPU: All CPU(s) started at EL1
Sep 6 00:03:01.007253 kernel: alternatives: patching kernel code
Sep 6 00:03:01.007269 kernel: devtmpfs: initialized
Sep 6 00:03:01.007295 kernel: KASLR disabled due to lack of seed
Sep 6 00:03:01.007315 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:03:01.007331 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:03:01.007348 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:03:01.007364 kernel: SMBIOS 3.0.0 present.
Sep 6 00:03:01.007380 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 6 00:03:01.007429 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:03:01.007448 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 00:03:01.007466 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 00:03:01.007489 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 00:03:01.007506 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:03:01.007523 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1
Sep 6 00:03:01.007540 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:03:01.007556 kernel: cpuidle: using governor menu
Sep 6 00:03:01.007577 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 00:03:01.007594 kernel: ASID allocator initialised with 32768 entries
Sep 6 00:03:01.007610 kernel: ACPI: bus type PCI registered
Sep 6 00:03:01.007627 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:03:01.007643 kernel: Serial: AMBA PL011 UART driver
Sep 6 00:03:01.007660 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:03:01.007677 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 00:03:01.007694 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:03:01.007711 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 00:03:01.007731 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:03:01.007748 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 00:03:01.007765 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:03:01.007781 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:03:01.007797 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:03:01.007813 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:03:01.007830 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:03:01.007847 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:03:01.007863 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:03:01.007880 kernel: ACPI: Interpreter enabled
Sep 6 00:03:01.007900 kernel: ACPI: Using GIC for interrupt routing
Sep 6 00:03:01.007917 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 00:03:01.007933 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 6 00:03:01.013544 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:03:01.013775 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 00:03:01.013965 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 00:03:01.014160 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 6 00:03:01.014362 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 6 00:03:01.014420 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 6 00:03:01.014441 kernel: acpiphp: Slot [1] registered
Sep 6 00:03:01.014458 kernel: acpiphp: Slot [2] registered
Sep 6 00:03:01.014475 kernel: acpiphp: Slot [3] registered
Sep 6 00:03:01.014492 kernel: acpiphp: Slot [4] registered
Sep 6 00:03:01.014509 kernel: acpiphp: Slot [5] registered
Sep 6 00:03:01.014525 kernel: acpiphp: Slot [6] registered
Sep 6 00:03:01.014541 kernel: acpiphp: Slot [7] registered
Sep 6 00:03:01.014563 kernel: acpiphp: Slot [8] registered
Sep 6 00:03:01.014580 kernel: acpiphp: Slot [9] registered
Sep 6 00:03:01.014596 kernel: acpiphp: Slot [10] registered
Sep 6 00:03:01.014613 kernel: acpiphp: Slot [11] registered
Sep 6 00:03:01.014629 kernel: acpiphp: Slot [12] registered
Sep 6 00:03:01.014646 kernel: acpiphp: Slot [13] registered
Sep 6 00:03:01.014662 kernel: acpiphp: Slot [14] registered
Sep 6 00:03:01.014678 kernel: acpiphp: Slot [15] registered
Sep 6 00:03:01.014695 kernel: acpiphp: Slot [16] registered
Sep 6 00:03:01.014715 kernel: acpiphp: Slot [17] registered
Sep 6 00:03:01.014732 kernel: acpiphp: Slot [18] registered
Sep 6 00:03:01.014748 kernel: acpiphp: Slot [19] registered
Sep 6 00:03:01.014765 kernel: acpiphp: Slot [20] registered
Sep 6 00:03:01.014781 kernel: acpiphp: Slot [21] registered
Sep 6 00:03:01.014797 kernel: acpiphp: Slot [22] registered
Sep 6 00:03:01.014813 kernel: acpiphp: Slot [23] registered
Sep 6 00:03:01.014830 kernel: acpiphp: Slot [24] registered
Sep 6 00:03:01.014846 kernel: acpiphp: Slot [25] registered
Sep 6 00:03:01.014862 kernel: acpiphp: Slot [26] registered
Sep 6 00:03:01.014882 kernel: acpiphp: Slot [27] registered
Sep 6 00:03:01.014899 kernel: acpiphp: Slot [28] registered
Sep 6 00:03:01.014915 kernel: acpiphp: Slot [29] registered
Sep 6 00:03:01.014932 kernel: acpiphp: Slot [30] registered
Sep 6 00:03:01.014948 kernel: acpiphp: Slot [31] registered
Sep 6 00:03:01.014964 kernel: PCI host bridge to bus 0000:00
Sep 6 00:03:01.015162 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 6 00:03:01.015337 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 00:03:01.015546 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 6 00:03:01.015726 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 6 00:03:01.015945 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 6 00:03:01.016173 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 6 00:03:01.016375 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 6 00:03:01.016619 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 6 00:03:01.016829 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 6 00:03:01.018611 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 6 00:03:01.018836 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 6 00:03:01.019029 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 6 00:03:01.019221 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 6 00:03:01.019433 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 6 00:03:01.019637 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 6 00:03:01.019842 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 6 00:03:01.020040 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 6 00:03:01.020238 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 6 00:03:01.020474 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 6 00:03:01.020684 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 6 00:03:01.020865 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 6 00:03:01.021048 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 00:03:01.021237 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 6 00:03:01.021261 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 00:03:01.021278 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 00:03:01.021296 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 00:03:01.021313 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 00:03:01.021330 kernel: iommu: Default domain type: Translated
Sep 6 00:03:01.021347 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 00:03:01.021364 kernel: vgaarb: loaded
Sep 6 00:03:01.021381 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:03:01.023478 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:03:01.023504 kernel: PTP clock support registered
Sep 6 00:03:01.023522 kernel: Registered efivars operations
Sep 6 00:03:01.023539 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 00:03:01.023556 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:03:01.023573 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:03:01.023589 kernel: pnp: PnP ACPI init
Sep 6 00:03:01.023834 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 6 00:03:01.023860 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 00:03:01.023882 kernel: NET: Registered PF_INET protocol family
Sep 6 00:03:01.023900 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:03:01.023917 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:03:01.023934 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:03:01.023950 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:03:01.023967 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 00:03:01.023985 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:03:01.024002 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.024022 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:03:01.024039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:03:01.024056 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:03:01.024072 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 6 00:03:01.024089 kernel: kvm [1]: HYP mode not available
Sep 6 00:03:01.024106 kernel: Initialise system trusted keyrings
Sep 6 00:03:01.024123 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:03:01.024139 kernel: Key type asymmetric registered
Sep 6 00:03:01.024156 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:03:01.024176 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:03:01.024193 kernel: io scheduler mq-deadline registered
Sep 6 00:03:01.024209 kernel: io scheduler kyber registered
Sep 6 00:03:01.024226 kernel: io scheduler bfq registered
Sep 6 00:03:01.024514 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 6 00:03:01.024542 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 00:03:01.024559 kernel: ACPI: button: Power Button [PWRB]
Sep 6 00:03:01.024576 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 6 00:03:01.024593 kernel: ACPI: button: Sleep Button [SLPB]
Sep 6 00:03:01.024616 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:03:01.024634 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 6 00:03:01.024839 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 6 00:03:01.024865 kernel: printk: console [ttyS0] disabled
Sep 6 00:03:01.026702 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 6 00:03:01.026742 kernel: printk: console [ttyS0] enabled
Sep 6 00:03:01.026759 kernel: printk: bootconsole [uart0] disabled
Sep 6 00:03:01.026776 kernel: thunder_xcv, ver 1.0
Sep 6 00:03:01.026793 kernel: thunder_bgx, ver 1.0
Sep 6 00:03:01.026818 kernel: nicpf, ver 1.0
Sep 6 00:03:01.026834 kernel: nicvf, ver 1.0
Sep 6 00:03:01.027097 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 00:03:01.027283 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:03:00 UTC (1757116980)
Sep 6 00:03:01.027307 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:03:01.027324 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:03:01.027341 kernel: Segment Routing with IPv6
Sep 6 00:03:01.027358 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:03:01.027379 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:03:01.027417 kernel: Key type dns_resolver registered
Sep 6 00:03:01.027437 kernel: registered taskstats version 1
Sep 6 00:03:01.027454 kernel: Loading compiled-in X.509 certificates
Sep 6 00:03:01.027471 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386'
Sep 6 00:03:01.027488 kernel: Key type .fscrypt registered
Sep 6 00:03:01.027504 kernel: Key type fscrypt-provisioning registered
Sep 6 00:03:01.027521 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:03:01.027537 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:03:01.027559 kernel: ima: No architecture policies found
Sep 6 00:03:01.027576 kernel: clk: Disabling unused clocks
Sep 6 00:03:01.027593 kernel: Freeing unused kernel memory: 36416K
Sep 6 00:03:01.027609 kernel: Run /init as init process
Sep 6 00:03:01.027626 kernel: with arguments:
Sep 6 00:03:01.027642 kernel: /init
Sep 6 00:03:01.027658 kernel: with environment:
Sep 6 00:03:01.027674 kernel: HOME=/
Sep 6 00:03:01.027691 kernel: TERM=linux
Sep 6 00:03:01.027711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:03:01.027732 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:03:01.027754 systemd[1]: Detected virtualization amazon.
Sep 6 00:03:01.027773 systemd[1]: Detected architecture arm64.
Sep 6 00:03:01.027790 systemd[1]: Running in initrd.
Sep 6 00:03:01.027808 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:03:01.027825 systemd[1]: Hostname set to .
Sep 6 00:03:01.027848 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:03:01.027866 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:03:01.027884 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:03:01.027901 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:03:01.027919 systemd[1]: Reached target paths.target.
Sep 6 00:03:01.027936 systemd[1]: Reached target slices.target.
Sep 6 00:03:01.027954 systemd[1]: Reached target swap.target.
Sep 6 00:03:01.027972 systemd[1]: Reached target timers.target.
Sep 6 00:03:01.027994 systemd[1]: Listening on iscsid.socket.
Sep 6 00:03:01.028012 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:03:01.028030 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:03:01.028048 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:03:01.028066 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:03:01.028084 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:03:01.028102 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:03:01.028120 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:03:01.028138 systemd[1]: Reached target sockets.target.
Sep 6 00:03:01.028160 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:03:01.028178 systemd[1]: Finished network-cleanup.service.
Sep 6 00:03:01.028196 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:03:01.028214 systemd[1]: Starting systemd-journald.service...
Sep 6 00:03:01.028232 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:03:01.028250 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:03:01.028268 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:03:01.028286 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:03:01.030472 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:03:01.030492 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:03:01.030511 kernel: audit: type=1130 audit(1757116980.997:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.030529 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:03:01.030547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:03:01.030569 systemd-journald[309]: Journal started
Sep 6 00:03:01.030663 systemd-journald[309]: Runtime Journal (/run/log/journal/ec244f4e17d53489808463d554cdd010) is 8.0M, max 75.4M, 67.4M free.
Sep 6 00:03:00.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:00.972748 systemd-modules-load[310]: Inserted module 'overlay'
Sep 6 00:03:01.061582 systemd[1]: Started systemd-journald.service.
Sep 6 00:03:01.061649 kernel: audit: type=1130 audit(1757116981.051:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.069809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:03:01.072723 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:03:01.079256 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:03:01.089990 systemd-resolved[311]: Positive Trust Anchors:
Sep 6 00:03:01.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.100468 kernel: audit: type=1130 audit(1757116981.068:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.100588 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:03:01.103811 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:03:01.122177 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:03:01.127648 systemd-modules-load[310]: Inserted module 'br_netfilter'
Sep 6 00:03:01.129704 kernel: Bridge firewalling registered
Sep 6 00:03:01.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.149427 kernel: audit: type=1130 audit(1757116981.073:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.149493 kernel: SCSI subsystem initialized
Sep 6 00:03:01.172361 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:03:01.172463 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:03:01.172494 dracut-cmdline[326]: dracut-dracut-053
Sep 6 00:03:01.178439 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:03:01.183202 systemd-modules-load[310]: Inserted module 'dm_multipath'
Sep 6 00:03:01.186633 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 00:03:01.191223 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:03:01.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.210418 kernel: audit: type=1130 audit(1757116981.201:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.211858 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:03:01.230746 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:03:01.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.242423 kernel: audit: type=1130 audit(1757116981.233:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.343427 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:03:01.364429 kernel: iscsi: registered transport (tcp)
Sep 6 00:03:01.391672 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:03:01.391752 kernel: QLogic iSCSI HBA Driver
Sep 6 00:03:01.543425 kernel: random: crng init done
Sep 6 00:03:01.543610 systemd-resolved[311]: Defaulting to hostname 'linux'.
Sep 6 00:03:01.547590 systemd[1]: Started systemd-resolved.service.
Sep 6 00:03:01.551055 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:03:01.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.562443 kernel: audit: type=1130 audit(1757116981.549:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.576559 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:03:01.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.581459 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:03:01.591099 kernel: audit: type=1130 audit(1757116981.578:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:01.648451 kernel: raid6: neonx8 gen() 6424 MB/s
Sep 6 00:03:01.666426 kernel: raid6: neonx8 xor() 4763 MB/s
Sep 6 00:03:01.684426 kernel: raid6: neonx4 gen() 6569 MB/s
Sep 6 00:03:01.702426 kernel: raid6: neonx4 xor() 4955 MB/s
Sep 6 00:03:01.720427 kernel: raid6: neonx2 gen() 5796 MB/s
Sep 6 00:03:01.738425 kernel: raid6: neonx2 xor() 4558 MB/s
Sep 6 00:03:01.756426 kernel: raid6: neonx1 gen() 4489 MB/s
Sep 6 00:03:01.774425 kernel: raid6: neonx1 xor() 3686 MB/s
Sep 6 00:03:01.792426 kernel: raid6: int64x8 gen() 3443 MB/s
Sep 6 00:03:01.810426 kernel: raid6: int64x8 xor() 2092 MB/s
Sep 6 00:03:01.828425 kernel: raid6: int64x4 gen() 3851 MB/s
Sep 6 00:03:01.846425 kernel: raid6: int64x4 xor() 2200 MB/s
Sep 6 00:03:01.864425 kernel: raid6: int64x2 gen() 3619 MB/s
Sep 6 00:03:01.882425 kernel: raid6: int64x2 xor() 1953 MB/s
Sep 6 00:03:01.900427 kernel: raid6: int64x1 gen() 2768 MB/s
Sep 6 00:03:01.919923 kernel: raid6: int64x1 xor() 1453 MB/s
Sep 6 00:03:01.919962 kernel: raid6: using algorithm neonx4 gen() 6569 MB/s
Sep 6 00:03:01.919987 kernel: raid6: .... xor() 4955 MB/s, rmw enabled
Sep 6 00:03:01.921759 kernel: raid6: using neon recovery algorithm
Sep 6 00:03:01.941958 kernel: xor: measuring software checksum speed
Sep 6 00:03:01.942019 kernel: 8regs : 9298 MB/sec
Sep 6 00:03:01.943906 kernel: 32regs : 11103 MB/sec
Sep 6 00:03:01.945896 kernel: arm64_neon : 9574 MB/sec
Sep 6 00:03:01.945926 kernel: xor: using function: 32regs (11103 MB/sec)
Sep 6 00:03:02.043442 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 6 00:03:02.060768 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 00:03:02.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:02.070000 audit: BPF prog-id=7 op=LOAD
Sep 6 00:03:02.070000 audit: BPF prog-id=8 op=LOAD
Sep 6 00:03:02.073650 kernel: audit: type=1130 audit(1757116982.063:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:02.072639 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:03:02.103812 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Sep 6 00:03:02.112823 systemd[1]: Started systemd-udevd.service.
Sep 6 00:03:02.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:02.125959 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 00:03:02.155148 dracut-pre-trigger[527]: rd.md=0: removing MD RAID activation
Sep 6 00:03:02.216329 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 00:03:02.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:02.218536 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:03:02.319200 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:03:02.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 6 00:03:02.427594 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 6 00:03:02.427663 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 6 00:03:02.446114 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 6 00:03:02.446339 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 6 00:03:02.446615 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:49:de:90:59:79 Sep 6 00:03:02.449199 (udev-worker)[567]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:03:02.474621 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 6 00:03:02.474685 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 6 00:03:02.485445 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 6 00:03:02.492535 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:03:02.492580 kernel: GPT:9289727 != 16777215 Sep 6 00:03:02.492604 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:03:02.494710 kernel: GPT:9289727 != 16777215 Sep 6 00:03:02.495965 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:03:02.499406 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:02.570428 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (571) Sep 6 00:03:02.636270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:03:02.652774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:03:02.698749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:03:02.698962 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:03:02.709748 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:03:02.715889 systemd[1]: Starting disk-uuid.service... Sep 6 00:03:02.733815 disk-uuid[673]: Primary Header is updated. Sep 6 00:03:02.733815 disk-uuid[673]: Secondary Entries is updated. 
Sep 6 00:03:02.733815 disk-uuid[673]: Secondary Header is updated. Sep 6 00:03:02.747433 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:03.761448 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:03:03.762005 disk-uuid[674]: The operation has completed successfully. Sep 6 00:03:03.934153 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:03:03.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:03.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:03.934351 systemd[1]: Finished disk-uuid.service. Sep 6 00:03:03.958274 systemd[1]: Starting verity-setup.service... Sep 6 00:03:03.995808 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 00:03:04.095646 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:03:04.101241 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:03:04.105085 systemd[1]: Finished verity-setup.service. Sep 6 00:03:04.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.199606 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:03:04.200827 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:03:04.204218 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:03:04.208460 systemd[1]: Starting ignition-setup.service... Sep 6 00:03:04.219365 systemd[1]: Starting parse-ip-for-networkd.service... 
Sep 6 00:03:04.244063 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:04.244129 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:04.246386 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:04.288278 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:04.303983 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:03:04.328187 systemd[1]: Finished ignition-setup.service. Sep 6 00:03:04.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.331728 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:03:04.370031 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:03:04.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.377000 audit: BPF prog-id=9 op=LOAD Sep 6 00:03:04.379549 systemd[1]: Starting systemd-networkd.service... Sep 6 00:03:04.428703 systemd-networkd[1187]: lo: Link UP Sep 6 00:03:04.428724 systemd-networkd[1187]: lo: Gained carrier Sep 6 00:03:04.432589 systemd-networkd[1187]: Enumeration completed Sep 6 00:03:04.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.433059 systemd-networkd[1187]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:03:04.433270 systemd[1]: Started systemd-networkd.service. Sep 6 00:03:04.435344 systemd[1]: Reached target network.target. Sep 6 00:03:04.451297 systemd[1]: Starting iscsiuio.service... 
Sep 6 00:03:04.460163 systemd-networkd[1187]: eth0: Link UP Sep 6 00:03:04.460183 systemd-networkd[1187]: eth0: Gained carrier Sep 6 00:03:04.469799 systemd[1]: Started iscsiuio.service. Sep 6 00:03:04.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.476351 systemd[1]: Starting iscsid.service... Sep 6 00:03:04.484383 systemd-networkd[1187]: eth0: DHCPv4 address 172.31.30.45/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:03:04.487682 iscsid[1192]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:03:04.487682 iscsid[1192]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:03:04.487682 iscsid[1192]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:03:04.487682 iscsid[1192]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:03:04.487682 iscsid[1192]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:03:04.487682 iscsid[1192]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:03:04.500921 systemd[1]: Started iscsid.service. Sep 6 00:03:04.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.517900 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:03:04.538495 systemd[1]: Finished dracut-initqueue.service. 
Sep 6 00:03:04.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:04.544422 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:03:04.547842 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:03:04.551544 systemd[1]: Reached target remote-fs.target. Sep 6 00:03:04.556295 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:03:04.572513 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:03:04.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.116485 ignition[1160]: Ignition 2.14.0 Sep 6 00:03:05.117019 ignition[1160]: Stage: fetch-offline Sep 6 00:03:05.118513 ignition[1160]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.118579 ignition[1160]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.138248 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.141167 ignition[1160]: Ignition finished successfully Sep 6 00:03:05.144335 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:03:05.157935 kernel: kauditd_printk_skb: 16 callbacks suppressed Sep 6 00:03:05.157975 kernel: audit: type=1130 audit(1757116985.145:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:05.147947 systemd[1]: Starting ignition-fetch.service... Sep 6 00:03:05.167243 ignition[1211]: Ignition 2.14.0 Sep 6 00:03:05.167275 ignition[1211]: Stage: fetch Sep 6 00:03:05.167593 ignition[1211]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.167647 ignition[1211]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.182911 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.181231 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.191879 ignition[1211]: INFO : PUT result: OK Sep 6 00:03:05.195540 ignition[1211]: DEBUG : parsed url from cmdline: "" Sep 6 00:03:05.195540 ignition[1211]: INFO : no config URL provided Sep 6 00:03:05.195540 ignition[1211]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:03:05.202493 ignition[1211]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 6 00:03:05.202493 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.202493 ignition[1211]: INFO : PUT result: OK Sep 6 00:03:05.202493 ignition[1211]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 6 00:03:05.217249 ignition[1211]: INFO : GET result: OK Sep 6 00:03:05.217249 ignition[1211]: DEBUG : parsing config with SHA512: 3b2bcceb3e8b9bb2945adcae0fac64fb9019e1f3e64e32e68696875025cad19ec266e5a27838d666eecc9398607fa6fdb1b91310fe7c780523716a79c17fadc1 Sep 6 00:03:05.221990 unknown[1211]: fetched base config from "system" Sep 6 00:03:05.222009 unknown[1211]: fetched base config from "system" Sep 6 00:03:05.229566 ignition[1211]: fetch: fetch complete Sep 6 00:03:05.222032 unknown[1211]: fetched user config from "aws" Sep 6 00:03:05.229580 ignition[1211]: fetch: fetch passed Sep 6 00:03:05.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.234853 systemd[1]: Finished ignition-fetch.service. Sep 6 00:03:05.229681 ignition[1211]: Ignition finished successfully Sep 6 00:03:05.248343 systemd[1]: Starting ignition-kargs.service... Sep 6 00:03:05.255463 kernel: audit: type=1130 audit(1757116985.236:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.269340 ignition[1218]: Ignition 2.14.0 Sep 6 00:03:05.269369 ignition[1218]: Stage: kargs Sep 6 00:03:05.269683 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.269742 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.285130 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.281787 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.290437 ignition[1218]: INFO : PUT result: OK Sep 6 00:03:05.295518 ignition[1218]: kargs: kargs passed Sep 6 00:03:05.295619 ignition[1218]: Ignition finished successfully Sep 6 00:03:05.300551 systemd[1]: Finished ignition-kargs.service. Sep 6 00:03:05.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.305231 systemd[1]: Starting ignition-disks.service... Sep 6 00:03:05.314916 kernel: audit: type=1130 audit(1757116985.302:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:05.322132 ignition[1224]: Ignition 2.14.0 Sep 6 00:03:05.324027 ignition[1224]: Stage: disks Sep 6 00:03:05.325646 ignition[1224]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.325805 ignition[1224]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.337112 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.340292 ignition[1224]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.342985 ignition[1224]: INFO : PUT result: OK Sep 6 00:03:05.348544 ignition[1224]: disks: disks passed Sep 6 00:03:05.348805 ignition[1224]: Ignition finished successfully Sep 6 00:03:05.355758 systemd[1]: Finished ignition-disks.service. Sep 6 00:03:05.372245 kernel: audit: type=1130 audit(1757116985.358:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.367763 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:03:05.372255 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:03:05.374117 systemd[1]: Reached target local-fs.target. Sep 6 00:03:05.377555 systemd[1]: Reached target sysinit.target. Sep 6 00:03:05.384295 systemd[1]: Reached target basic.target. Sep 6 00:03:05.391655 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:03:05.428133 systemd-fsck[1232]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 6 00:03:05.433092 systemd[1]: Finished systemd-fsck-root.service. 
Sep 6 00:03:05.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.443943 systemd[1]: Mounting sysroot.mount... Sep 6 00:03:05.451433 kernel: audit: type=1130 audit(1757116985.433:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.473453 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:03:05.477307 systemd[1]: Mounted sysroot.mount. Sep 6 00:03:05.480611 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:03:05.491961 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:03:05.497631 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:03:05.497709 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:03:05.497768 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:03:05.513896 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:03:05.535194 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:03:05.536708 systemd[1]: Starting initrd-setup-root.service... 
Sep 6 00:03:05.558906 initrd-setup-root[1254]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:03:05.573440 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1249) Sep 6 00:03:05.574912 initrd-setup-root[1262]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:03:05.585310 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:05.585346 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:05.585378 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:05.592561 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:03:05.601420 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:05.606768 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:03:05.612678 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:03:05.821856 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:03:05.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.837009 kernel: audit: type=1130 audit(1757116985.825:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.836628 systemd[1]: Starting ignition-mount.service... Sep 6 00:03:05.843535 systemd[1]: Starting sysroot-boot.service... Sep 6 00:03:05.857710 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:03:05.857876 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Sep 6 00:03:05.882095 ignition[1314]: INFO : Ignition 2.14.0 Sep 6 00:03:05.884071 ignition[1314]: INFO : Stage: mount Sep 6 00:03:05.884071 ignition[1314]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:05.884071 ignition[1314]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:05.902327 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:05.908496 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:05.908496 ignition[1314]: INFO : PUT result: OK Sep 6 00:03:05.917239 ignition[1314]: INFO : mount: mount passed Sep 6 00:03:05.920568 ignition[1314]: INFO : Ignition finished successfully Sep 6 00:03:05.924096 systemd[1]: Finished sysroot-boot.service. Sep 6 00:03:05.927922 systemd[1]: Finished ignition-mount.service. Sep 6 00:03:05.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.933103 systemd[1]: Starting ignition-files.service... Sep 6 00:03:05.948033 kernel: audit: type=1130 audit(1757116985.924:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.948073 kernel: audit: type=1130 audit(1757116985.930:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:05.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:05.957328 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:03:05.981442 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1325) Sep 6 00:03:05.988556 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:05.988602 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:05.988627 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:06.005429 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:06.010544 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:03:06.029891 ignition[1344]: INFO : Ignition 2.14.0 Sep 6 00:03:06.029891 ignition[1344]: INFO : Stage: files Sep 6 00:03:06.034598 ignition[1344]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:06.034598 ignition[1344]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:06.051612 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:06.054301 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:06.059675 ignition[1344]: INFO : PUT result: OK Sep 6 00:03:06.064964 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:03:06.070633 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:03:06.070633 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:03:06.110631 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:03:06.114088 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:03:06.114088 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user 
"core" Sep 6 00:03:06.113024 unknown[1344]: wrote ssh authorized keys file for user: core Sep 6 00:03:06.122304 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:03:06.122304 ignition[1344]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 6 00:03:06.207223 ignition[1344]: INFO : GET result: OK Sep 6 00:03:06.410570 systemd-networkd[1187]: eth0: Gained IPv6LL Sep 6 00:03:06.508664 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:03:06.512860 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:03:06.516633 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:03:06.516633 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:03:06.525157 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:03:06.533875 ignition[1344]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2822466489" Sep 6 00:03:06.536962 ignition[1344]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2822466489": device or resource busy Sep 6 00:03:06.536962 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2822466489", trying btrfs: device or resource busy Sep 6 00:03:06.536962 ignition[1344]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2822466489" Sep 6 00:03:06.554969 ignition[1344]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2822466489" Sep 6 00:03:06.560569 ignition[1344]: INFO : op(3): [started] unmounting "/mnt/oem2822466489" 
Sep 6 00:03:06.560569 ignition[1344]: INFO : op(3): [finished] unmounting "/mnt/oem2822466489"
Sep 6 00:03:06.560569 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 6 00:03:06.560569 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:03:06.560569 ignition[1344]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 6 00:03:06.563434 systemd[1]: mnt-oem2822466489.mount: Deactivated successfully.
Sep 6 00:03:06.792817 ignition[1344]: INFO : GET result: OK
Sep 6 00:03:06.948938 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:06.954662 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:03:06.954662 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:07.004111 ignition[1344]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2001248365"
Sep 6 00:03:07.004111 ignition[1344]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2001248365": device or resource busy
Sep 6 00:03:07.004111 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2001248365", trying btrfs: device or resource busy
Sep 6 00:03:07.004111 ignition[1344]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2001248365"
Sep 6 00:03:07.004111 ignition[1344]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2001248365"
Sep 6 00:03:07.004111 ignition[1344]: INFO : op(6): [started] unmounting "/mnt/oem2001248365"
Sep 6 00:03:07.004111 ignition[1344]: INFO : op(6): [finished] unmounting "/mnt/oem2001248365"
Sep 6 00:03:07.004111 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:03:07.004111 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:03:07.004111 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:07.041087 systemd[1]: mnt-oem2001248365.mount: Deactivated successfully.
Sep 6 00:03:07.059154 ignition[1344]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3851726495"
Sep 6 00:03:07.062163 ignition[1344]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3851726495": device or resource busy
Sep 6 00:03:07.062163 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3851726495", trying btrfs: device or resource busy
Sep 6 00:03:07.062163 ignition[1344]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3851726495"
Sep 6 00:03:07.072296 ignition[1344]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3851726495"
Sep 6 00:03:07.072296 ignition[1344]: INFO : op(9): [started] unmounting "/mnt/oem3851726495"
Sep 6 00:03:07.072296 ignition[1344]: INFO : op(9): [finished] unmounting "/mnt/oem3851726495"
Sep 6 00:03:07.072296 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:03:07.072296 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:07.072296 ignition[1344]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 6 00:03:07.544431 ignition[1344]: INFO : GET result: OK
Sep 6 00:03:08.095517 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:08.100159 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:03:08.100159 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:08.117148 ignition[1344]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015590750"
Sep 6 00:03:08.120114 ignition[1344]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015590750": device or resource busy
Sep 6 00:03:08.120114 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3015590750", trying btrfs: device or resource busy
Sep 6 00:03:08.120114 ignition[1344]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015590750"
Sep 6 00:03:08.130929 ignition[1344]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015590750"
Sep 6 00:03:08.130929 ignition[1344]: INFO : op(c): [started] unmounting "/mnt/oem3015590750"
Sep 6 00:03:08.137780 ignition[1344]: INFO : op(c): [finished] unmounting "/mnt/oem3015590750"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(10): [started] processing unit "nvidia.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service"
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:03:08.137780 ignition[1344]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:03:08.213963 ignition[1344]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:03:08.219143 systemd[1]: mnt-oem3015590750.mount: Deactivated successfully.
Sep 6 00:03:08.232304 ignition[1344]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:03:08.236207 ignition[1344]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:03:08.236207 ignition[1344]: INFO : files: files passed
Sep 6 00:03:08.241780 ignition[1344]: INFO : Ignition finished successfully
Sep 6 00:03:08.243748 systemd[1]: Finished ignition-files.service.
Sep 6 00:03:08.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.255449 kernel: audit: type=1130 audit(1757116988.246:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.261607 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:03:08.266210 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:03:08.270320 systemd[1]: Starting ignition-quench.service...
Sep 6 00:03:08.280671 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:03:08.283678 systemd[1]: Finished ignition-quench.service.
Sep 6 00:03:08.294570 kernel: audit: type=1130 audit(1757116988.282:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.297594 initrd-setup-root-after-ignition[1369]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:03:08.301918 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:03:08.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.308628 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:03:08.313351 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:03:08.342226 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:03:08.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.342670 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:03:08.343083 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:03:08.343213 systemd[1]: Reached target initrd.target.
Sep 6 00:03:08.343632 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:03:08.345022 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:03:08.376773 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:03:08.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.381700 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:03:08.409337 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:03:08.412873 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:03:08.416586 systemd[1]: Stopped target timers.target.
Sep 6 00:03:08.420221 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:03:08.420448 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:03:08.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.425779 systemd[1]: Stopped target initrd.target.
Sep 6 00:03:08.428907 systemd[1]: Stopped target basic.target.
Sep 6 00:03:08.432387 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:03:08.435982 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:03:08.439431 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:03:08.441337 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:03:08.444743 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:03:08.448010 systemd[1]: Stopped target sysinit.target.
Sep 6 00:03:08.451308 systemd[1]: Stopped target local-fs.target.
Sep 6 00:03:08.454414 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:03:08.457659 systemd[1]: Stopped target swap.target.
Sep 6 00:03:08.460798 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:03:08.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.462279 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:03:08.468643 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:03:08.470445 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:03:08.475769 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:03:08.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.479154 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:03:08.479241 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:03:08.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.483652 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:03:08.487630 systemd[1]: Stopped ignition-files.service.
Sep 6 00:03:08.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.492110 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:03:08.503187 systemd[1]: Stopping iscsid.service...
Sep 6 00:03:08.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.506556 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:03:08.518586 iscsid[1192]: iscsid shutting down.
Sep 6 00:03:08.506669 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 00:03:08.511281 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:03:08.527437 ignition[1382]: INFO : Ignition 2.14.0
Sep 6 00:03:08.527437 ignition[1382]: INFO : Stage: umount
Sep 6 00:03:08.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.540854 ignition[1382]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:03:08.540854 ignition[1382]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 6 00:03:08.527970 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:03:08.528112 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:03:08.530207 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:03:08.530293 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:03:08.535363 systemd[1]: iscsid.service: Deactivated successfully.
Sep 6 00:03:08.535610 systemd[1]: Stopped iscsid.service.
Sep 6 00:03:08.566796 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:03:08.568921 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:03:08.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.580519 systemd[1]: Stopping iscsiuio.service...
Sep 6 00:03:08.584189 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:03:08.586225 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:03:08.591547 ignition[1382]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 6 00:03:08.594291 ignition[1382]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 6 00:03:08.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.597791 ignition[1382]: INFO : PUT result: OK
Sep 6 00:03:08.600832 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:03:08.607927 ignition[1382]: INFO : umount: umount passed
Sep 6 00:03:08.610634 ignition[1382]: INFO : Ignition finished successfully
Sep 6 00:03:08.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.610295 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:03:08.610543 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:03:08.618043 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:03:08.620232 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:03:08.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.623707 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:03:08.623806 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:03:08.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.632212 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:03:08.633803 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:03:08.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.638920 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 00:03:08.638998 systemd[1]: Stopped ignition-fetch.service.
Sep 6 00:03:08.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.642612 systemd[1]: Stopped target network.target.
Sep 6 00:03:08.645871 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:03:08.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.647435 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:03:08.649467 systemd[1]: Stopped target paths.target.
Sep 6 00:03:08.653154 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:03:08.656516 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:03:08.658467 systemd[1]: Stopped target slices.target.
Sep 6 00:03:08.662244 systemd[1]: Stopped target sockets.target.
Sep 6 00:03:08.665696 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:03:08.667222 systemd[1]: Closed iscsid.socket.
Sep 6 00:03:08.670186 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:03:08.670254 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:03:08.673609 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:03:08.674971 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:03:08.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.682625 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:03:08.682704 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:03:08.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.688501 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:03:08.691765 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:03:08.695462 systemd-networkd[1187]: eth0: DHCPv6 lease lost
Sep 6 00:03:08.697296 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:03:08.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.697534 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:03:08.703000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 00:03:08.705433 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:03:08.705690 systemd[1]: Stopped systemd-resolved.service.
Sep 6 00:03:08.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.710000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:03:08.712183 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:03:08.712270 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:03:08.718711 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:03:08.723492 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:03:08.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.725042 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 00:03:08.727195 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:03:08.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.727285 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:03:08.731111 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:03:08.731196 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 00:03:08.734991 systemd[1]: Stopping systemd-udevd.service...
Sep 6 00:03:08.748439 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 00:03:08.758253 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:03:08.758585 systemd[1]: Stopped network-cleanup.service.
Sep 6 00:03:08.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.767274 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:03:08.769455 systemd[1]: Stopped systemd-udevd.service.
Sep 6 00:03:08.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.772918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 00:03:08.773016 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 00:03:08.783428 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 00:03:08.783887 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 00:03:08.789720 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 00:03:08.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.789817 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 00:03:08.791597 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 00:03:08.791677 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 00:03:08.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.793834 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:03:08.793909 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 00:03:08.804932 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 00:03:08.818676 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:03:08.821095 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 00:03:08.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.827502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 00:03:08.830072 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 00:03:08.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:08.834052 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 00:03:08.838976 systemd[1]: Starting initrd-switch-root.service...
Sep 6 00:03:08.853082 systemd[1]: Switching root.
Sep 6 00:03:08.880582 systemd-journald[309]: Journal stopped
Sep 6 00:03:14.938164 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Sep 6 00:03:14.938313 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 00:03:14.938361 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 00:03:14.938409 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 00:03:14.938453 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 00:03:14.938486 kernel: SELinux: policy capability open_perms=1
Sep 6 00:03:14.938517 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 00:03:14.938547 kernel: SELinux: policy capability always_check_network=0
Sep 6 00:03:14.938578 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 00:03:14.938608 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 00:03:14.938641 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 00:03:14.938672 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 00:03:14.938711 systemd[1]: Successfully loaded SELinux policy in 130.401ms.
Sep 6 00:03:14.938755 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.350ms.
Sep 6 00:03:14.938790 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:03:14.938823 systemd[1]: Detected virtualization amazon.
Sep 6 00:03:14.938854 systemd[1]: Detected architecture arm64.
Sep 6 00:03:14.938888 systemd[1]: Detected first boot.
Sep 6 00:03:14.938919 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:03:14.938953 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 00:03:14.938981 kernel: kauditd_printk_skb: 47 callbacks suppressed
Sep 6 00:03:14.939016 kernel: audit: type=1400 audit(1757116990.297:84): avc: denied { associate } for pid=1415 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 00:03:14.939047 kernel: audit: type=1300 audit(1757116990.297:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1398 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:14.939077 kernel: audit: type=1327 audit(1757116990.297:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:03:14.939115 kernel: audit: type=1400 audit(1757116990.314:85): avc: denied { associate } for pid=1415 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 6 00:03:14.939147 kernel: audit: type=1300 audit(1757116990.314:85): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1398 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:14.939175 kernel: audit: type=1307 audit(1757116990.314:85): cwd="/"
Sep 6 00:03:14.939208 kernel: audit: type=1302 audit(1757116990.314:85): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:03:14.939241 kernel: audit: type=1302 audit(1757116990.314:85): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:03:14.939272 kernel: audit: type=1327 audit(1757116990.314:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:03:14.939308 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:03:14.939341 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:03:14.939373 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:03:14.939429 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:03:14.939463 kernel: audit: type=1334 audit(1757116994.552:86): prog-id=12 op=LOAD
Sep 6 00:03:14.939494 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 00:03:14.939528 systemd[1]: Stopped initrd-switch-root.service.
Sep 6 00:03:14.939573 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:03:14.939606 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:03:14.939636 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:03:14.939669 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 6 00:03:14.939699 systemd[1]: Created slice system-getty.slice.
Sep 6 00:03:14.939728 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:03:14.939760 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:03:14.939790 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:03:14.939825 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:03:14.939854 systemd[1]: Created slice user.slice.
Sep 6 00:03:14.939884 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:03:14.939915 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:03:14.939945 systemd[1]: Set up automount boot.automount.
Sep 6 00:03:14.939977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:03:14.940007 systemd[1]: Stopped target initrd-switch-root.target.
Sep 6 00:03:14.940037 systemd[1]: Stopped target initrd-fs.target.
Sep 6 00:03:14.940066 systemd[1]: Stopped target initrd-root-fs.target.
Sep 6 00:03:14.940102 systemd[1]: Reached target integritysetup.target.
Sep 6 00:03:14.940135 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:03:14.940167 systemd[1]: Reached target remote-fs.target. Sep 6 00:03:14.940197 systemd[1]: Reached target slices.target. Sep 6 00:03:14.940227 systemd[1]: Reached target swap.target. Sep 6 00:03:14.940256 systemd[1]: Reached target torcx.target. Sep 6 00:03:14.940288 systemd[1]: Reached target veritysetup.target. Sep 6 00:03:14.940317 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:03:14.940349 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:03:14.940383 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:03:14.940435 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:03:14.940467 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:03:14.940496 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:03:14.940530 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:03:14.940560 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:03:14.940589 systemd[1]: Mounting media.mount... Sep 6 00:03:14.940621 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:03:14.940651 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:03:14.940682 systemd[1]: Mounting tmp.mount... Sep 6 00:03:14.940717 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:03:14.940747 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:03:14.940780 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:03:14.940811 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:03:14.940844 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:03:14.940874 systemd[1]: Starting modprobe@drm.service... Sep 6 00:03:14.940906 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:03:14.940936 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:03:14.940965 systemd[1]: Starting modprobe@loop.service... Sep 6 00:03:14.941000 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 6 00:03:14.941030 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:03:14.941069 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:03:14.941099 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:03:14.941130 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:03:14.941162 systemd[1]: Stopped systemd-journald.service. Sep 6 00:03:14.941190 kernel: fuse: init (API version 7.34) Sep 6 00:03:14.941221 systemd[1]: Starting systemd-journald.service... Sep 6 00:03:14.941252 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:03:14.941286 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:03:14.941316 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:03:14.941345 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:03:14.941376 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:03:14.941424 systemd[1]: Stopped verity-setup.service. Sep 6 00:03:14.941459 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:03:14.941489 kernel: loop: module loaded Sep 6 00:03:14.941518 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:03:14.941551 systemd[1]: Mounted media.mount. Sep 6 00:03:14.941585 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:03:14.941615 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:03:14.941647 systemd[1]: Mounted tmp.mount. Sep 6 00:03:14.941677 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:03:14.941708 systemd-journald[1494]: Journal started Sep 6 00:03:14.941800 systemd-journald[1494]: Runtime Journal (/run/log/journal/ec244f4e17d53489808463d554cdd010) is 8.0M, max 75.4M, 67.4M free. 
Sep 6 00:03:09.836000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:03:10.065000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:03:10.065000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:03:10.065000 audit: BPF prog-id=10 op=LOAD Sep 6 00:03:10.065000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:03:10.065000 audit: BPF prog-id=11 op=LOAD Sep 6 00:03:10.065000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:03:10.297000 audit[1415]: AVC avc: denied { associate } for pid=1415 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:03:10.297000 audit[1415]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1398 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:10.297000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:03:10.314000 audit[1415]: AVC avc: denied { associate } for pid=1415 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:03:10.314000 audit[1415]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1398 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:10.314000 audit: CWD cwd="/" Sep 6 00:03:10.314000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:03:14.948289 systemd[1]: Started systemd-journald.service. Sep 6 00:03:10.314000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:03:10.314000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:03:14.552000 audit: BPF prog-id=12 op=LOAD Sep 6 00:03:14.555000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:03:14.555000 audit: BPF prog-id=13 op=LOAD Sep 6 00:03:14.555000 audit: BPF prog-id=14 op=LOAD Sep 6 00:03:14.555000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:03:14.555000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:03:14.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.564000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:03:14.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:03:14.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.847000 audit: BPF prog-id=15 op=LOAD Sep 6 00:03:14.847000 audit: BPF prog-id=16 op=LOAD Sep 6 00:03:14.847000 audit: BPF prog-id=17 op=LOAD Sep 6 00:03:14.847000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:03:14.847000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:03:14.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:14.933000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:03:14.933000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff547c7b0 a2=4000 a3=1 items=0 ppid=1 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:14.933000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:03:14.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 00:03:14.550075 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:03:14.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:10.291605 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:03:14.550095 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 6 00:03:10.292148 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:03:14.559201 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 6 00:03:10.292198 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:03:14.947973 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:03:10.292264 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:03:14.948286 systemd[1]: Finished modprobe@configfs.service. 
Sep 6 00:03:10.292291 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:03:14.952232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:03:10.292354 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:03:14.952561 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:03:10.292385 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:03:14.955259 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:03:10.292809 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:03:14.955546 systemd[1]: Finished modprobe@drm.service. Sep 6 00:03:10.292933 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:03:14.957911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:03:10.292976 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:03:14.958188 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 6 00:03:10.295350 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:03:10.295465 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:03:14.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:10.295514 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:03:14.964475 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 6 00:03:10.295555 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:03:14.964780 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:03:10.295606 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:03:14.967380 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:03:14.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:10.295644 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:03:14.967674 systemd[1]: Finished modprobe@loop.service. Sep 6 00:03:13.669092 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:13Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:03:14.970531 systemd[1]: Finished systemd-modules-load.service. 
Sep 6 00:03:13.669637 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:13Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:03:14.973146 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:03:13.669929 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:13Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:03:13.670387 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:13Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:03:13.670542 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:13Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:03:13.670680 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2025-09-06T00:03:13Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:03:14.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Sep 6 00:03:14.980145 systemd[1]: Reached target network-pre.target. Sep 6 00:03:14.984709 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:03:14.989339 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:03:14.997238 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:03:14.999518 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:03:15.008629 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:03:15.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.010961 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:03:15.013269 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:03:15.015382 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:03:15.020682 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:03:15.024910 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:03:15.026857 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:03:15.030206 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:03:15.065102 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:03:15.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.067279 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:03:15.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 00:03:15.078345 systemd-journald[1494]: Time spent on flushing to /var/log/journal/ec244f4e17d53489808463d554cdd010 is 63.082ms for 1134 entries. Sep 6 00:03:15.078345 systemd-journald[1494]: System Journal (/var/log/journal/ec244f4e17d53489808463d554cdd010) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:03:15.173037 systemd-journald[1494]: Received client request to flush runtime journal. Sep 6 00:03:15.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.076074 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:03:15.083222 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:03:15.090668 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:03:15.174674 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:03:15.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.213979 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:03:15.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.218707 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:03:15.234056 udevadm[1535]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:03:15.262421 systemd[1]: Finished systemd-sysusers.service. 
Sep 6 00:03:15.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.929059 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:03:15.944775 kernel: kauditd_printk_skb: 45 callbacks suppressed Sep 6 00:03:15.944909 kernel: audit: type=1130 audit(1757116995.929:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:15.932000 audit: BPF prog-id=18 op=LOAD Sep 6 00:03:15.947853 kernel: audit: type=1334 audit(1757116995.932:131): prog-id=18 op=LOAD Sep 6 00:03:15.945755 systemd[1]: Starting systemd-udevd.service... Sep 6 00:03:15.943000 audit: BPF prog-id=19 op=LOAD Sep 6 00:03:15.952328 kernel: audit: type=1334 audit(1757116995.943:132): prog-id=19 op=LOAD Sep 6 00:03:15.952431 kernel: audit: type=1334 audit(1757116995.943:133): prog-id=7 op=UNLOAD Sep 6 00:03:15.943000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:03:15.943000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:03:15.957690 kernel: audit: type=1334 audit(1757116995.943:134): prog-id=8 op=UNLOAD Sep 6 00:03:15.988511 systemd-udevd[1536]: Using default interface naming scheme 'v252'. Sep 6 00:03:16.059419 kernel: audit: type=1130 audit(1757116996.050:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:16.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:16.049482 systemd[1]: Started systemd-udevd.service. Sep 6 00:03:16.058000 audit: BPF prog-id=20 op=LOAD Sep 6 00:03:16.063180 systemd[1]: Starting systemd-networkd.service... Sep 6 00:03:16.065463 kernel: audit: type=1334 audit(1757116996.058:136): prog-id=20 op=LOAD Sep 6 00:03:16.086434 kernel: audit: type=1334 audit(1757116996.079:137): prog-id=21 op=LOAD Sep 6 00:03:16.086556 kernel: audit: type=1334 audit(1757116996.081:138): prog-id=22 op=LOAD Sep 6 00:03:16.086611 kernel: audit: type=1334 audit(1757116996.084:139): prog-id=23 op=LOAD Sep 6 00:03:16.079000 audit: BPF prog-id=21 op=LOAD Sep 6 00:03:16.081000 audit: BPF prog-id=22 op=LOAD Sep 6 00:03:16.084000 audit: BPF prog-id=23 op=LOAD Sep 6 00:03:16.087821 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:03:16.157824 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:03:16.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:16.171860 systemd[1]: Started systemd-userdbd.service. Sep 6 00:03:16.200507 (udev-worker)[1543]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:03:16.383977 systemd-networkd[1539]: lo: Link UP Sep 6 00:03:16.384574 systemd-networkd[1539]: lo: Gained carrier Sep 6 00:03:16.385752 systemd-networkd[1539]: Enumeration completed Sep 6 00:03:16.386027 systemd[1]: Started systemd-networkd.service. 
Sep 6 00:03:16.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:16.390114 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:03:16.396816 systemd-networkd[1539]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:03:16.404447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:03:16.410349 systemd-networkd[1539]: eth0: Link UP Sep 6 00:03:16.412647 systemd-networkd[1539]: eth0: Gained carrier Sep 6 00:03:16.426644 systemd-networkd[1539]: eth0: DHCPv4 address 172.31.30.45/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:03:16.531690 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:03:16.535058 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:03:16.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:16.539587 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:03:16.591650 lvm[1655]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:03:16.632971 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:03:16.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:16.635257 systemd[1]: Reached target cryptsetup.target. Sep 6 00:03:16.639551 systemd[1]: Starting lvm2-activation.service... Sep 6 00:03:16.648270 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:03:16.683097 systemd[1]: Finished lvm2-activation.service. 
Sep 6 00:03:16.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:16.685195 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:03:16.687038 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:03:16.687080 systemd[1]: Reached target local-fs.target. Sep 6 00:03:16.688854 systemd[1]: Reached target machines.target. Sep 6 00:03:16.693100 systemd[1]: Starting ldconfig.service... Sep 6 00:03:16.696286 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:03:16.696420 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:03:16.698605 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:03:16.702312 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:03:16.713000 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:03:16.717723 systemd[1]: Starting systemd-sysext.service... Sep 6 00:03:16.720417 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1658 (bootctl) Sep 6 00:03:16.723048 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:03:16.755417 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:03:16.767591 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:03:16.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:16.780676 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:03:16.781026 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:03:16.806440 kernel: loop0: detected capacity change from 0 to 203944 Sep 6 00:03:16.873573 systemd-fsck[1668]: fsck.fat 4.2 (2021-01-31) Sep 6 00:03:16.873573 systemd-fsck[1668]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Sep 6 00:03:16.879826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:03:16.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:16.884835 systemd[1]: Mounting boot.mount... Sep 6 00:03:16.912550 systemd[1]: Mounted boot.mount. Sep 6 00:03:16.948245 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:03:16.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:17.077591 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:03:17.111419 kernel: loop1: detected capacity change from 0 to 203944 Sep 6 00:03:17.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:17.124191 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:03:17.125211 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:03:17.135262 (sd-sysext)[1683]: Using extensions 'kubernetes'. Sep 6 00:03:17.137676 (sd-sysext)[1683]: Merged extensions into '/usr'. Sep 6 00:03:17.174432 systemd[1]: Mounting usr-share-oem.mount... 
Sep 6 00:03:17.178882 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.181870 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:17.186873 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:17.191188 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:17.193201 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.193507 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:17.200988 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:03:17.204329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:17.204867 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:17.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.208307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:17.208782 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:17.211889 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:17.212279 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:17.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.217447 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:03:17.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.224071 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:03:17.226620 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:17.226965 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.229352 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:03:17.244799 systemd[1]: Reloading.
Sep 6 00:03:17.292661 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:03:17.309928 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:03:17.350755 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:03:17.371372 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2025-09-06T00:03:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:03:17.375756 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2025-09-06T00:03:17Z" level=info msg="torcx already run"
Sep 6 00:03:17.582101 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:03:17.582141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:03:17.622279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:03:17.767000 audit: BPF prog-id=24 op=LOAD
Sep 6 00:03:17.767000 audit: BPF prog-id=21 op=UNLOAD
Sep 6 00:03:17.767000 audit: BPF prog-id=25 op=LOAD
Sep 6 00:03:17.768000 audit: BPF prog-id=26 op=LOAD
Sep 6 00:03:17.768000 audit: BPF prog-id=22 op=UNLOAD
Sep 6 00:03:17.768000 audit: BPF prog-id=23 op=UNLOAD
Sep 6 00:03:17.769000 audit: BPF prog-id=27 op=LOAD
Sep 6 00:03:17.769000 audit: BPF prog-id=15 op=UNLOAD
Sep 6 00:03:17.769000 audit: BPF prog-id=28 op=LOAD
Sep 6 00:03:17.769000 audit: BPF prog-id=29 op=LOAD
Sep 6 00:03:17.769000 audit: BPF prog-id=16 op=UNLOAD
Sep 6 00:03:17.769000 audit: BPF prog-id=17 op=UNLOAD
Sep 6 00:03:17.772000 audit: BPF prog-id=30 op=LOAD
Sep 6 00:03:17.772000 audit: BPF prog-id=20 op=UNLOAD
Sep 6 00:03:17.772000 audit: BPF prog-id=31 op=LOAD
Sep 6 00:03:17.772000 audit: BPF prog-id=32 op=LOAD
Sep 6 00:03:17.772000 audit: BPF prog-id=18 op=UNLOAD
Sep 6 00:03:17.772000 audit: BPF prog-id=19 op=UNLOAD
Sep 6 00:03:17.797303 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:03:17.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.809623 systemd[1]: Starting audit-rules.service...
Sep 6 00:03:17.813695 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:03:17.818573 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:03:17.821000 audit: BPF prog-id=33 op=LOAD
Sep 6 00:03:17.825114 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:03:17.835000 audit: BPF prog-id=34 op=LOAD
Sep 6 00:03:17.841744 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:03:17.845970 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:03:17.860524 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.863910 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:17.869434 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:17.877426 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:17.879273 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.879637 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:17.881773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:17.883233 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:17.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.891242 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.893842 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:17.895685 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.896001 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:17.902424 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.907302 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:03:17.907000 audit[1772]: SYSTEM_BOOT pid=1772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.910862 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.911201 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:17.912854 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:17.913163 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:17.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.924045 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:03:17.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.931529 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:03:17.934505 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:03:17.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.935511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:17.935796 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:17.938334 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:17.941769 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:03:17.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.952708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:17.952978 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:17.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.958721 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:03:17.967648 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:03:17.967949 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:03:17.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:17.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:18.004898 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:03:18.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:18.058579 systemd-networkd[1539]: eth0: Gained IPv6LL
Sep 6 00:03:18.062000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:03:18.062000 audit[1791]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdda8a690 a2=420 a3=0 items=0 ppid=1766 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:18.062000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:03:18.065317 augenrules[1791]: No rules
Sep 6 00:03:18.066766 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:03:18.069860 systemd[1]: Finished audit-rules.service.
Sep 6 00:03:18.094202 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:03:18.097217 systemd[1]: Reached target time-set.target.
Sep 6 00:03:18.102075 systemd-resolved[1769]: Positive Trust Anchors:
Sep 6 00:03:18.102660 systemd-resolved[1769]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:03:18.102816 systemd-resolved[1769]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:03:18.159359 systemd-resolved[1769]: Defaulting to hostname 'linux'.
Sep 6 00:03:18.162502 systemd[1]: Started systemd-resolved.service.
Sep 6 00:03:18.164532 systemd[1]: Reached target network.target.
Sep 6 00:03:18.166258 systemd[1]: Reached target network-online.target.
Sep 6 00:03:18.170337 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:03:18.230485 ldconfig[1657]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:03:18.239057 systemd[1]: Finished ldconfig.service.
Sep 6 00:03:18.243119 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:03:18.258238 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:03:18.260372 systemd[1]: Reached target sysinit.target.
Sep 6 00:03:18.262378 systemd[1]: Started motdgen.path.
Sep 6 00:03:18.264206 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:03:18.266913 systemd[1]: Started logrotate.timer.
Sep 6 00:03:18.268677 systemd[1]: Started mdadm.timer.
Sep 6 00:03:18.270173 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:03:18.272062 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:03:18.272125 systemd[1]: Reached target paths.target.
Sep 6 00:03:18.273799 systemd[1]: Reached target timers.target.
Sep 6 00:03:18.275974 systemd[1]: Listening on dbus.socket.
Sep 6 00:03:18.279548 systemd[1]: Starting docker.socket...
Sep 6 00:03:18.286435 systemd[1]: Listening on sshd.socket.
Sep 6 00:03:18.288322 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:18.289146 systemd[1]: Listening on docker.socket.
Sep 6 00:03:18.291070 systemd[1]: Reached target sockets.target.
Sep 6 00:03:18.292877 systemd[1]: Reached target basic.target.
Sep 6 00:03:18.294612 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:03:18.294662 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:03:18.296709 systemd[1]: Started amazon-ssm-agent.service.
Sep 6 00:03:18.303599 systemd[1]: Starting containerd.service...
Sep 6 00:03:18.307326 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 00:03:18.312763 systemd[1]: Starting dbus.service...
Sep 6 00:03:18.316730 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:03:18.326822 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:03:18.419934 jq[1804]: false
Sep 6 00:03:18.330555 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:03:18.333030 systemd[1]: Starting kubelet.service...
Sep 6 00:03:18.336878 systemd[1]: Starting motdgen.service...
Sep 6 00:03:18.346888 systemd[1]: Started nvidia.service.
Sep 6 00:03:18.351724 systemd[1]: Starting prepare-helm.service...
Sep 6 00:03:18.355963 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:03:18.362680 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:03:18.368871 systemd[1]: Starting systemd-logind.service...
Sep 6 00:03:18.525659 jq[1814]: true
Sep 6 00:03:18.372627 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:18.372769 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:03:18.374295 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:03:18.375914 systemd[1]: Starting update-engine.service...
Sep 6 00:03:18.381361 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:03:18.553162 tar[1817]: linux-arm64/helm
Sep 6 00:03:18.417143 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:03:18.557581 jq[1823]: true
Sep 6 00:03:18.417541 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:03:18.485651 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:03:18.486008 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:03:18.610100 extend-filesystems[1805]: Found loop1
Sep 6 00:03:18.614022 extend-filesystems[1805]: Found nvme0n1
Sep 6 00:03:18.615732 extend-filesystems[1805]: Found nvme0n1p3
Sep 6 00:03:18.615732 extend-filesystems[1805]: Found usr
Sep 6 00:03:18.615732 extend-filesystems[1805]: Found nvme0n1p4
Sep 6 00:03:18.615732 extend-filesystems[1805]: Found nvme0n1p1
Sep 6 00:03:18.615732 extend-filesystems[1805]: Found nvme0n1p2
Sep 6 00:03:18.625781 extend-filesystems[1805]: Found nvme0n1p6
Sep 6 00:03:18.632761 extend-filesystems[1805]: Found nvme0n1p7
Sep 6 00:03:18.632761 extend-filesystems[1805]: Found nvme0n1p9
Sep 6 00:03:18.632761 extend-filesystems[1805]: Checking size of /dev/nvme0n1p9
Sep 6 00:03:18.632242 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:03:18.632618 systemd[1]: Finished motdgen.service.
Sep 6 00:03:18.664294 dbus-daemon[1803]: [system] SELinux support is enabled
Sep 6 00:03:18.665032 systemd[1]: Started dbus.service.
Sep 6 00:03:18.669915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:03:18.669956 systemd[1]: Reached target system-config.target.
Sep 6 00:03:18.671995 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:03:18.672038 systemd[1]: Reached target user-config.target.
Sep 6 00:03:18.708512 dbus-daemon[1803]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1539 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 6 00:03:18.715043 systemd[1]: Starting systemd-hostnamed.service...
Sep 6 00:03:18.761575 extend-filesystems[1805]: Resized partition /dev/nvme0n1p9
Sep 6 00:03:18.781974 extend-filesystems[1866]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:03:18.815421 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 6 00:03:18.857611 update_engine[1813]: I0906 00:03:18.857092 1813 main.cc:92] Flatcar Update Engine starting
Sep 6 00:03:18.901510 update_engine[1813]: I0906 00:03:18.881064 1813 update_check_scheduler.cc:74] Next update check in 6m54s
Sep 6 00:03:18.873500 systemd[1]: Started update-engine.service.
Sep 6 00:03:18.878483 systemd[1]: Started locksmithd.service.
Sep 6 00:03:18.915421 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 6 00:03:18.916017 bash[1867]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:03:18.985337 amazon-ssm-agent[1800]: 2025/09/06 00:03:18 Failed to load instance info from vault. RegistrationKey does not exist.
Sep 6 00:03:18.985337 amazon-ssm-agent[1800]: Initializing new seelog logger
Sep 6 00:03:18.985337 amazon-ssm-agent[1800]: New Seelog Logger Creation Complete
Sep 6 00:03:18.985337 amazon-ssm-agent[1800]: 2025/09/06 00:03:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 6 00:03:18.985337 amazon-ssm-agent[1800]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 6 00:03:18.985337 amazon-ssm-agent[1800]: 2025/09/06 00:03:18 processing appconfig overrides
Sep 6 00:03:18.967295 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:03:18.986434 extend-filesystems[1866]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 6 00:03:18.986434 extend-filesystems[1866]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 6 00:03:18.986434 extend-filesystems[1866]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 6 00:03:18.967692 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:03:19.006490 extend-filesystems[1805]: Resized filesystem in /dev/nvme0n1p9
Sep 6 00:03:18.971211 systemd[1]: nvidia.service: Deactivated successfully.
Sep 6 00:03:18.972278 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:03:18.972842 systemd-logind[1812]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 6 00:03:18.973213 systemd-logind[1812]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 6 00:03:18.979113 systemd-logind[1812]: New seat seat0.
Sep 6 00:03:19.008952 systemd[1]: Started systemd-logind.service.
Sep 6 00:03:19.172992 env[1827]: time="2025-09-06T00:03:19.172873992Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:03:19.269757 dbus-daemon[1803]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 6 00:03:19.270032 systemd[1]: Started systemd-hostnamed.service.
Sep 6 00:03:19.277857 dbus-daemon[1803]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1853 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 6 00:03:19.283480 systemd[1]: Starting polkit.service...
Sep 6 00:03:19.326919 polkitd[1922]: Started polkitd version 121
Sep 6 00:03:19.368488 env[1827]: time="2025-09-06T00:03:19.368222797Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:03:19.368733 env[1827]: time="2025-09-06T00:03:19.368684917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:19.385569 env[1827]: time="2025-09-06T00:03:19.385463857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:03:19.385569 env[1827]: time="2025-09-06T00:03:19.385561669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:19.386005 env[1827]: time="2025-09-06T00:03:19.385951513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:03:19.386114 env[1827]: time="2025-09-06T00:03:19.386000077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:19.386114 env[1827]: time="2025-09-06T00:03:19.386033629Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:03:19.386114 env[1827]: time="2025-09-06T00:03:19.386058325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:19.386296 env[1827]: time="2025-09-06T00:03:19.386232313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:19.395575 polkitd[1922]: Loading rules from directory /etc/polkit-1/rules.d
Sep 6 00:03:19.395704 polkitd[1922]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 6 00:03:19.397077 env[1827]: time="2025-09-06T00:03:19.396987637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:19.397519 env[1827]: time="2025-09-06T00:03:19.397439377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:03:19.397601 env[1827]: time="2025-09-06T00:03:19.397514425Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:03:19.397774 env[1827]: time="2025-09-06T00:03:19.397707109Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:03:19.397867 env[1827]: time="2025-09-06T00:03:19.397768201Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:03:19.404326 polkitd[1922]: Finished loading, compiling and executing 2 rules
Sep 6 00:03:19.405198 dbus-daemon[1803]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 6 00:03:19.405470 systemd[1]: Started polkit.service.
Sep 6 00:03:19.409882 polkitd[1922]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 6 00:03:19.410587 env[1827]: time="2025-09-06T00:03:19.410518297Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:03:19.410715 env[1827]: time="2025-09-06T00:03:19.410648209Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:03:19.410775 env[1827]: time="2025-09-06T00:03:19.410707873Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:03:19.410833 env[1827]: time="2025-09-06T00:03:19.410785441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.410969 env[1827]: time="2025-09-06T00:03:19.410912869Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.411071 env[1827]: time="2025-09-06T00:03:19.410971717Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.411071 env[1827]: time="2025-09-06T00:03:19.411008689Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.411636 env[1827]: time="2025-09-06T00:03:19.411583657Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.411740 env[1827]: time="2025-09-06T00:03:19.411643297Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.411740 env[1827]: time="2025-09-06T00:03:19.411678277Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.411740 env[1827]: time="2025-09-06T00:03:19.411709405Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.411903 env[1827]: time="2025-09-06T00:03:19.411742717Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:03:19.412055 env[1827]: time="2025-09-06T00:03:19.412008097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:03:19.412239 env[1827]: time="2025-09-06T00:03:19.412193677Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:03:19.412921 env[1827]: time="2025-09-06T00:03:19.412863853Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:03:19.413019 env[1827]: time="2025-09-06T00:03:19.412936069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413019 env[1827]: time="2025-09-06T00:03:19.412972045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:03:19.413242 env[1827]: time="2025-09-06T00:03:19.413201353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413405 env[1827]: time="2025-09-06T00:03:19.413351449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413471 env[1827]: time="2025-09-06T00:03:19.413421889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413471 env[1827]: time="2025-09-06T00:03:19.413454637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413585 env[1827]: time="2025-09-06T00:03:19.413488225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413585 env[1827]: time="2025-09-06T00:03:19.413518981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413585 env[1827]: time="2025-09-06T00:03:19.413548153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413585 env[1827]: time="2025-09-06T00:03:19.413577205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.413793 env[1827]: time="2025-09-06T00:03:19.413613613Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:03:19.413925 env[1827]: time="2025-09-06T00:03:19.413882497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.414000 env[1827]: time="2025-09-06T00:03:19.413934505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.414000 env[1827]: time="2025-09-06T00:03:19.413969209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.414103 env[1827]: time="2025-09-06T00:03:19.413999557Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:03:19.414103 env[1827]: time="2025-09-06T00:03:19.414031009Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 00:03:19.414103 env[1827]: time="2025-09-06T00:03:19.414056809Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:03:19.414103 env[1827]: time="2025-09-06T00:03:19.414093289Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 00:03:19.414324 env[1827]: time="2025-09-06T00:03:19.414157117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:03:19.414653 env[1827]: time="2025-09-06T00:03:19.414545089Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:03:19.416516 env[1827]: time="2025-09-06T00:03:19.414658621Z" level=info msg="Connect containerd service"
Sep 6 00:03:19.416516 env[1827]: time="2025-09-06T00:03:19.414715681Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:03:19.416516 env[1827]: time="2025-09-06T00:03:19.415868689Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:03:19.416516 env[1827]: time="2025-09-06T00:03:19.416464801Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:03:19.419970 env[1827]: time="2025-09-06T00:03:19.416559169Z" level=info msg=serving...
address=/run/containerd/containerd.sock Sep 6 00:03:19.419970 env[1827]: time="2025-09-06T00:03:19.416659333Z" level=info msg="containerd successfully booted in 0.422235s" Sep 6 00:03:19.416780 systemd[1]: Started containerd.service. Sep 6 00:03:19.438145 env[1827]: time="2025-09-06T00:03:19.438043778Z" level=info msg="Start subscribing containerd event" Sep 6 00:03:19.438303 env[1827]: time="2025-09-06T00:03:19.438160370Z" level=info msg="Start recovering state" Sep 6 00:03:19.438303 env[1827]: time="2025-09-06T00:03:19.438291902Z" level=info msg="Start event monitor" Sep 6 00:03:19.438536 env[1827]: time="2025-09-06T00:03:19.438331934Z" level=info msg="Start snapshots syncer" Sep 6 00:03:19.438536 env[1827]: time="2025-09-06T00:03:19.438355502Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:03:19.438536 env[1827]: time="2025-09-06T00:03:19.438378014Z" level=info msg="Start streaming server" Sep 6 00:03:19.486325 systemd-hostnamed[1853]: Hostname set to (transient) Sep 6 00:03:19.486505 systemd-resolved[1769]: System hostname changed to 'ip-172-31-30-45'. Sep 6 00:03:19.583427 coreos-metadata[1802]: Sep 06 00:03:19.582 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 6 00:03:19.585643 coreos-metadata[1802]: Sep 06 00:03:19.585 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 6 00:03:19.587088 coreos-metadata[1802]: Sep 06 00:03:19.586 INFO Fetch successful Sep 6 00:03:19.587503 coreos-metadata[1802]: Sep 06 00:03:19.587 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 6 00:03:19.596009 coreos-metadata[1802]: Sep 06 00:03:19.594 INFO Fetch successful Sep 6 00:03:19.602366 unknown[1802]: wrote ssh authorized keys file for user: core Sep 6 00:03:19.641344 update-ssh-keys[1963]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:03:19.642624 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 6 00:03:19.690832 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Create new startup processor Sep 6 00:03:19.704975 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] registered plugins: {} Sep 6 00:03:19.705121 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing bookkeeping folders Sep 6 00:03:19.705121 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO removing the completed state files Sep 6 00:03:19.705121 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing bookkeeping folders for long running plugins Sep 6 00:03:19.705121 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 6 00:03:19.705121 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing healthcheck folders for long running plugins Sep 6 00:03:19.705121 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing locations for inventory plugin Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing default location for custom inventory Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing default location for file inventory Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Initializing default location for role inventory Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Init the cloudwatchlogs publisher Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:runDockerAction Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:downloadContent Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:configurePackage Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 
00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:runDocument Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:softwareInventory Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:configureDocker Sep 6 00:03:19.705445 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform independent plugin aws:refreshAssociation Sep 6 00:03:19.706050 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Successfully loaded platform dependent plugin aws:runShellScript Sep 6 00:03:19.706050 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 6 00:03:19.706050 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO OS: linux, Arch: arm64 Sep 6 00:03:19.708770 amazon-ssm-agent[1800]: datastore file /var/lib/amazon/ssm/i-06d33744ff7f537aa/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Sep 6 00:03:19.791120 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] Starting document processing engine... 
Sep 6 00:03:19.886589 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 6 00:03:19.980153 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 6 00:03:20.074678 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] Starting message polling Sep 6 00:03:20.169462 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 6 00:03:20.264301 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [instanceID=i-06d33744ff7f537aa] Starting association polling Sep 6 00:03:20.278870 tar[1817]: linux-arm64/LICENSE Sep 6 00:03:20.279518 tar[1817]: linux-arm64/README.md Sep 6 00:03:20.286871 systemd[1]: Finished prepare-helm.service. Sep 6 00:03:20.359413 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 6 00:03:20.455095 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 6 00:03:20.513806 locksmithd[1878]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:03:20.550230 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 6 00:03:20.645907 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 6 00:03:20.741843 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 6 00:03:20.837863 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Starting session document processing engine... 
Sep 6 00:03:20.934131 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 6 00:03:21.030671 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Sep 6 00:03:21.127284 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-06d33744ff7f537aa, requestId: cb38a64a-5126-4391-a4c1-d14c5c8eac09 Sep 6 00:03:21.224148 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [OfflineService] Starting document processing engine... Sep 6 00:03:21.321259 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [OfflineService] [EngineProcessor] Starting Sep 6 00:03:21.418492 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [OfflineService] [EngineProcessor] Initial processing Sep 6 00:03:21.515963 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [OfflineService] Starting message polling Sep 6 00:03:21.613608 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [OfflineService] Starting send replies to MDS Sep 6 00:03:21.711504 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 6 00:03:21.753660 systemd[1]: Started kubelet.service. Sep 6 00:03:21.809581 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 6 00:03:21.907878 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:03:22.006252 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 6 00:03:22.104893 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] listening reply. 
Sep 6 00:03:22.203785 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [StartupProcessor] Executing startup processor tasks Sep 6 00:03:22.302749 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 6 00:03:22.401992 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 6 00:03:22.501537 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 6 00:03:22.601046 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06d33744ff7f537aa?role=subscribe&stream=input Sep 6 00:03:22.700834 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06d33744ff7f537aa?role=subscribe&stream=input Sep 6 00:03:22.800924 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] Starting receiving message from control channel Sep 6 00:03:22.901077 amazon-ssm-agent[1800]: 2025-09-06 00:03:19 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 6 00:03:23.041108 kubelet[2004]: E0906 00:03:23.041046 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:03:23.045248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:03:23.045598 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:03:23.046057 systemd[1]: kubelet.service: Consumed 1.591s CPU time. 
Sep 6 00:03:23.343810 sshd_keygen[1836]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:03:23.379886 systemd[1]: Finished sshd-keygen.service. Sep 6 00:03:23.384529 systemd[1]: Starting issuegen.service... Sep 6 00:03:23.393661 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:03:23.394017 systemd[1]: Finished issuegen.service. Sep 6 00:03:23.398929 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:03:23.412999 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:03:23.418075 systemd[1]: Started getty@tty1.service. Sep 6 00:03:23.422988 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:03:23.425500 systemd[1]: Reached target getty.target. Sep 6 00:03:23.427785 systemd[1]: Reached target multi-user.target. Sep 6 00:03:23.432354 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:03:23.449998 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:03:23.450386 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:03:23.452764 systemd[1]: Startup finished in 1.198s (kernel) + 9.087s (initrd) + 13.763s (userspace) = 24.049s. Sep 6 00:03:26.804451 systemd[1]: Created slice system-sshd.slice. Sep 6 00:03:26.807626 systemd[1]: Started sshd@0-172.31.30.45:22-147.75.109.163:33036.service. Sep 6 00:03:27.085666 sshd[2025]: Accepted publickey for core from 147.75.109.163 port 33036 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:27.091291 sshd[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:27.110780 systemd[1]: Created slice user-500.slice. Sep 6 00:03:27.113135 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:03:27.119139 systemd-logind[1812]: New session 1 of user core. Sep 6 00:03:27.135415 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:03:27.139684 systemd[1]: Starting user@500.service... 
Sep 6 00:03:27.146981 (systemd)[2028]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:27.334938 systemd[2028]: Queued start job for default target default.target. Sep 6 00:03:27.336383 systemd[2028]: Reached target paths.target. Sep 6 00:03:27.336676 systemd[2028]: Reached target sockets.target. Sep 6 00:03:27.337085 systemd[2028]: Reached target timers.target. Sep 6 00:03:27.337240 systemd[2028]: Reached target basic.target. Sep 6 00:03:27.337490 systemd[2028]: Reached target default.target. Sep 6 00:03:27.337567 systemd[1]: Started user@500.service. Sep 6 00:03:27.337786 systemd[2028]: Startup finished in 178ms. Sep 6 00:03:27.341840 systemd[1]: Started session-1.scope. Sep 6 00:03:27.496443 systemd[1]: Started sshd@1-172.31.30.45:22-147.75.109.163:33048.service. Sep 6 00:03:27.659471 sshd[2037]: Accepted publickey for core from 147.75.109.163 port 33048 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:27.662619 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:27.671512 systemd-logind[1812]: New session 2 of user core. Sep 6 00:03:27.671530 systemd[1]: Started session-2.scope. Sep 6 00:03:27.801054 sshd[2037]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:27.806212 systemd-logind[1812]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:03:27.806838 systemd[1]: sshd@1-172.31.30.45:22-147.75.109.163:33048.service: Deactivated successfully. Sep 6 00:03:27.808018 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:03:27.809655 systemd-logind[1812]: Removed session 2. Sep 6 00:03:27.831033 systemd[1]: Started sshd@2-172.31.30.45:22-147.75.109.163:33062.service. 
Sep 6 00:03:28.005919 sshd[2043]: Accepted publickey for core from 147.75.109.163 port 33062 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:28.008871 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:28.017278 systemd[1]: Started session-3.scope. Sep 6 00:03:28.018270 systemd-logind[1812]: New session 3 of user core. Sep 6 00:03:28.142812 sshd[2043]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:28.148054 systemd-logind[1812]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:03:28.148684 systemd[1]: sshd@2-172.31.30.45:22-147.75.109.163:33062.service: Deactivated successfully. Sep 6 00:03:28.149824 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:03:28.151474 systemd-logind[1812]: Removed session 3. Sep 6 00:03:28.169362 systemd[1]: Started sshd@3-172.31.30.45:22-147.75.109.163:33070.service. Sep 6 00:03:28.328158 systemd-timesyncd[1771]: Timed out waiting for reply from 50.117.3.95:123 (0.flatcar.pool.ntp.org). Sep 6 00:03:28.337499 sshd[2049]: Accepted publickey for core from 147.75.109.163 port 33070 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:28.339957 sshd[2049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:28.347485 systemd-logind[1812]: New session 4 of user core. Sep 6 00:03:28.348692 systemd[1]: Started session-4.scope. Sep 6 00:03:28.389679 systemd-timesyncd[1771]: Contacted time server 139.94.144.123:123 (0.flatcar.pool.ntp.org). Sep 6 00:03:28.389789 systemd-timesyncd[1771]: Initial clock synchronization to Sat 2025-09-06 00:03:28.499694 UTC. Sep 6 00:03:28.478512 sshd[2049]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:28.483796 systemd-logind[1812]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:03:28.484980 systemd[1]: sshd@3-172.31.30.45:22-147.75.109.163:33070.service: Deactivated successfully. 
Sep 6 00:03:28.486197 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:03:28.487621 systemd-logind[1812]: Removed session 4. Sep 6 00:03:28.506974 systemd[1]: Started sshd@4-172.31.30.45:22-147.75.109.163:33086.service. Sep 6 00:03:28.675303 sshd[2055]: Accepted publickey for core from 147.75.109.163 port 33086 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:28.677053 sshd[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:28.685057 systemd-logind[1812]: New session 5 of user core. Sep 6 00:03:28.685960 systemd[1]: Started session-5.scope. Sep 6 00:03:28.883355 sudo[2058]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:03:28.884465 sudo[2058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:03:28.964165 systemd[1]: Starting docker.service... Sep 6 00:03:29.086411 env[2068]: time="2025-09-06T00:03:29.086320424Z" level=info msg="Starting up" Sep 6 00:03:29.090267 env[2068]: time="2025-09-06T00:03:29.090220233Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:03:29.090515 env[2068]: time="2025-09-06T00:03:29.090484660Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:03:29.090656 env[2068]: time="2025-09-06T00:03:29.090623139Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:03:29.090782 env[2068]: time="2025-09-06T00:03:29.090755157Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:03:29.094561 env[2068]: time="2025-09-06T00:03:29.094513483Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:03:29.094740 env[2068]: time="2025-09-06T00:03:29.094711511Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:03:29.094878 env[2068]: time="2025-09-06T00:03:29.094844345Z" 
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:03:29.094986 env[2068]: time="2025-09-06T00:03:29.094958809Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:03:29.106632 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1212087035-merged.mount: Deactivated successfully. Sep 6 00:03:29.139483 env[2068]: time="2025-09-06T00:03:29.139367591Z" level=info msg="Loading containers: start." Sep 6 00:03:29.257756 amazon-ssm-agent[1800]: 2025-09-06 00:03:29 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 6 00:03:29.412456 kernel: Initializing XFRM netlink socket Sep 6 00:03:29.470683 env[2068]: time="2025-09-06T00:03:29.470635565Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:03:29.474440 (udev-worker)[2079]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:03:29.607340 systemd-networkd[1539]: docker0: Link UP Sep 6 00:03:29.631057 env[2068]: time="2025-09-06T00:03:29.630984361Z" level=info msg="Loading containers: done." Sep 6 00:03:29.663842 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3788174843-merged.mount: Deactivated successfully. 
Sep 6 00:03:29.675928 env[2068]: time="2025-09-06T00:03:29.675854589Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:03:29.676203 env[2068]: time="2025-09-06T00:03:29.676163857Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:03:29.676418 env[2068]: time="2025-09-06T00:03:29.676351459Z" level=info msg="Daemon has completed initialization" Sep 6 00:03:29.702726 systemd[1]: Started docker.service. Sep 6 00:03:29.712783 env[2068]: time="2025-09-06T00:03:29.712561647Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:03:30.905387 env[1827]: time="2025-09-06T00:03:30.905266753Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:03:31.574739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055389151.mount: Deactivated successfully. Sep 6 00:03:33.078993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:03:33.079319 systemd[1]: Stopped kubelet.service. Sep 6 00:03:33.079389 systemd[1]: kubelet.service: Consumed 1.591s CPU time. Sep 6 00:03:33.081955 systemd[1]: Starting kubelet.service... Sep 6 00:03:33.510189 systemd[1]: Started kubelet.service. 
Sep 6 00:03:33.513521 env[1827]: time="2025-09-06T00:03:33.513158827Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.527096 env[1827]: time="2025-09-06T00:03:33.527037775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.536222 env[1827]: time="2025-09-06T00:03:33.536162115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.543506 env[1827]: time="2025-09-06T00:03:33.543450077Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:33.545220 env[1827]: time="2025-09-06T00:03:33.545172816Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 6 00:03:33.548145 env[1827]: time="2025-09-06T00:03:33.548075829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:03:33.596415 kubelet[2196]: E0906 00:03:33.596318 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:03:33.605085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:03:33.605389 systemd[1]: 
kubelet.service: Failed with result 'exit-code'. Sep 6 00:03:35.407262 env[1827]: time="2025-09-06T00:03:35.407190114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:35.409971 env[1827]: time="2025-09-06T00:03:35.409909857Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:35.413377 env[1827]: time="2025-09-06T00:03:35.413323784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:35.418423 env[1827]: time="2025-09-06T00:03:35.418328164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 6 00:03:35.419083 env[1827]: time="2025-09-06T00:03:35.419016407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:03:35.419355 env[1827]: time="2025-09-06T00:03:35.416657453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:36.957270 env[1827]: time="2025-09-06T00:03:36.957213070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:36.962213 env[1827]: time="2025-09-06T00:03:36.962160483Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:36.967000 env[1827]: time="2025-09-06T00:03:36.965215774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:36.970501 env[1827]: time="2025-09-06T00:03:36.969132743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:36.971023 env[1827]: time="2025-09-06T00:03:36.970981382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 6 00:03:36.971785 env[1827]: time="2025-09-06T00:03:36.971738832Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:03:38.374844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190543012.mount: Deactivated successfully. 
Sep 6 00:03:39.243026 env[1827]: time="2025-09-06T00:03:39.242966608Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.245115 env[1827]: time="2025-09-06T00:03:39.245067779Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.247036 env[1827]: time="2025-09-06T00:03:39.246971870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.249153 env[1827]: time="2025-09-06T00:03:39.249095237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:39.250302 env[1827]: time="2025-09-06T00:03:39.250260543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 6 00:03:39.251103 env[1827]: time="2025-09-06T00:03:39.251052103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:03:39.806011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763085675.mount: Deactivated successfully. 
Sep 6 00:03:41.143005 env[1827]: time="2025-09-06T00:03:41.142934675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.145531 env[1827]: time="2025-09-06T00:03:41.145469143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.149182 env[1827]: time="2025-09-06T00:03:41.149128943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.152692 env[1827]: time="2025-09-06T00:03:41.152641623Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.154475 env[1827]: time="2025-09-06T00:03:41.154376093Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 6 00:03:41.155120 env[1827]: time="2025-09-06T00:03:41.155060704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 00:03:41.676455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390735403.mount: Deactivated successfully.
Sep 6 00:03:41.684630 env[1827]: time="2025-09-06T00:03:41.684575985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.686745 env[1827]: time="2025-09-06T00:03:41.686700286Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.689067 env[1827]: time="2025-09-06T00:03:41.689007228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.691585 env[1827]: time="2025-09-06T00:03:41.691537436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.692796 env[1827]: time="2025-09-06T00:03:41.692753836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 6 00:03:41.693583 env[1827]: time="2025-09-06T00:03:41.693534039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 6 00:03:42.323196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48360662.mount: Deactivated successfully.
Sep 6 00:03:43.828973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 00:03:43.829307 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:43.831864 systemd[1]: Starting kubelet.service...
Sep 6 00:03:44.167134 systemd[1]: Started kubelet.service.
Sep 6 00:03:44.252554 kubelet[2206]: E0906 00:03:44.252497 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:03:44.257285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:03:44.257629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:03:45.183541 env[1827]: time="2025-09-06T00:03:45.183480989Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.188284 env[1827]: time="2025-09-06T00:03:45.188235084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.192932 env[1827]: time="2025-09-06T00:03:45.192866004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.204963 env[1827]: time="2025-09-06T00:03:45.204885086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.205683 env[1827]: time="2025-09-06T00:03:45.205618914Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 6 00:03:49.519854 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 6 00:03:52.796036 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:52.801544 systemd[1]: Starting kubelet.service...
Sep 6 00:03:52.880139 systemd[1]: Reloading.
Sep 6 00:03:53.056023 /usr/lib/systemd/system-generators/torcx-generator[2258]: time="2025-09-06T00:03:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:03:53.056098 /usr/lib/systemd/system-generators/torcx-generator[2258]: time="2025-09-06T00:03:53Z" level=info msg="torcx already run"
Sep 6 00:03:53.219688 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:03:53.219729 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:03:53.261014 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:03:53.504180 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 6 00:03:53.504378 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 6 00:03:53.504982 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:53.508876 systemd[1]: Starting kubelet.service...
Sep 6 00:03:54.572794 systemd[1]: Started kubelet.service.
Sep 6 00:03:54.659557 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:03:54.660223 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:03:54.660360 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:03:54.660861 kubelet[2318]: I0906 00:03:54.660776 2318 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:03:55.907253 kubelet[2318]: I0906 00:03:55.907192 2318 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 6 00:03:55.908098 kubelet[2318]: I0906 00:03:55.908064 2318 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:03:55.908755 kubelet[2318]: I0906 00:03:55.908718 2318 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 6 00:03:55.976271 kubelet[2318]: E0906 00:03:55.976178 2318 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:03:55.978815 kubelet[2318]: I0906 00:03:55.978761 2318 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:03:56.002720 kubelet[2318]: E0906 00:03:56.002621 2318 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:03:56.002720 kubelet[2318]: I0906 00:03:56.002704 2318 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:03:56.010965 kubelet[2318]: I0906 00:03:56.010916 2318 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:03:56.011584 kubelet[2318]: I0906 00:03:56.011548 2318 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 6 00:03:56.011934 kubelet[2318]: I0906 00:03:56.011876 2318 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:03:56.012232 kubelet[2318]: I0906 00:03:56.011937 2318 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:03:56.012469 kubelet[2318]: I0906 00:03:56.012386 2318 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:03:56.012578 kubelet[2318]: I0906 00:03:56.012475 2318 container_manager_linux.go:300] "Creating device plugin manager"
Sep 6 00:03:56.012973 kubelet[2318]: I0906 00:03:56.012939 2318 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:03:56.021499 kubelet[2318]: I0906 00:03:56.021427 2318 kubelet.go:408] "Attempting to sync node with API server"
Sep 6 00:03:56.021499 kubelet[2318]: I0906 00:03:56.021499 2318 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:03:56.021754 kubelet[2318]: I0906 00:03:56.021546 2318 kubelet.go:314] "Adding apiserver pod source"
Sep 6 00:03:56.021754 kubelet[2318]: I0906 00:03:56.021703 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:03:56.037784 kubelet[2318]: W0906 00:03:56.037724 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-45&limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused
Sep 6 00:03:56.038065 kubelet[2318]: E0906 00:03:56.038028 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-45&limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:03:56.039981 kubelet[2318]: W0906 00:03:56.039903 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused
Sep 6 00:03:56.040272 kubelet[2318]: E0906 00:03:56.040236 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:03:56.040933 kubelet[2318]: I0906 00:03:56.040889 2318 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 00:03:56.042788 kubelet[2318]: I0906 00:03:56.042745 2318 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 6 00:03:56.043349 kubelet[2318]: W0906 00:03:56.043319 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:03:56.051309 kubelet[2318]: I0906 00:03:56.051265 2318 server.go:1274] "Started kubelet"
Sep 6 00:03:56.060814 kubelet[2318]: I0906 00:03:56.060726 2318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:03:56.062681 kubelet[2318]: I0906 00:03:56.062587 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:03:56.063302 kubelet[2318]: I0906 00:03:56.063235 2318 server.go:449] "Adding debug handlers to kubelet server"
Sep 6 00:03:56.063690 kubelet[2318]: I0906 00:03:56.063655 2318 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:03:56.066518 kubelet[2318]: E0906 00:03:56.064124 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.45:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-45.186288a27a9edb95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-45,UID:ip-172-31-30-45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-45,},FirstTimestamp:2025-09-06 00:03:56.051225493 +0000 UTC m=+1.470541372,LastTimestamp:2025-09-06 00:03:56.051225493 +0000 UTC m=+1.470541372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-45,}"
Sep 6 00:03:56.072102 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 6 00:03:56.072451 kubelet[2318]: I0906 00:03:56.072380 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:03:56.074974 kubelet[2318]: E0906 00:03:56.074931 2318 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:03:56.075729 kubelet[2318]: I0906 00:03:56.075686 2318 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:03:56.076576 kubelet[2318]: I0906 00:03:56.076511 2318 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 6 00:03:56.077347 kubelet[2318]: E0906 00:03:56.077297 2318 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-45\" not found"
Sep 6 00:03:56.078280 kubelet[2318]: I0906 00:03:56.078241 2318 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 6 00:03:56.079498 kubelet[2318]: I0906 00:03:56.079461 2318 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:03:56.081694 kubelet[2318]: I0906 00:03:56.081639 2318 factory.go:221] Registration of the systemd container factory successfully
Sep 6 00:03:56.081874 kubelet[2318]: I0906 00:03:56.081813 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:03:56.082520 kubelet[2318]: W0906 00:03:56.082436 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused
Sep 6 00:03:56.082675 kubelet[2318]: E0906 00:03:56.082544 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:03:56.083020 kubelet[2318]: E0906 00:03:56.082949 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-45?timeout=10s\": dial tcp 172.31.30.45:6443: connect: connection refused" interval="200ms"
Sep 6 00:03:56.087289 kubelet[2318]: I0906 00:03:56.087248 2318 factory.go:221] Registration of the containerd container factory successfully
Sep 6 00:03:56.116967 kubelet[2318]: I0906 00:03:56.116912 2318 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 6 00:03:56.116967 kubelet[2318]: I0906 00:03:56.116952 2318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 6 00:03:56.117212 kubelet[2318]: I0906 00:03:56.116990 2318 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:03:56.121493 kubelet[2318]: I0906 00:03:56.121365 2318 policy_none.go:49] "None policy: Start"
Sep 6 00:03:56.122633 kubelet[2318]: I0906 00:03:56.122577 2318 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 6 00:03:56.122633 kubelet[2318]: I0906 00:03:56.122633 2318 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:03:56.134602 systemd[1]: Created slice kubepods.slice.
Sep 6 00:03:56.147950 kubelet[2318]: I0906 00:03:56.147896 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:03:56.150308 kubelet[2318]: I0906 00:03:56.150264 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:03:56.150582 kubelet[2318]: I0906 00:03:56.150555 2318 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 6 00:03:56.150756 kubelet[2318]: I0906 00:03:56.150733 2318 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 6 00:03:56.150976 kubelet[2318]: E0906 00:03:56.150941 2318 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:03:56.151296 systemd[1]: Created slice kubepods-burstable.slice.
Sep 6 00:03:56.162312 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 6 00:03:56.167232 kubelet[2318]: W0906 00:03:56.167149 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused
Sep 6 00:03:56.167667 kubelet[2318]: E0906 00:03:56.167602 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:03:56.173751 kubelet[2318]: I0906 00:03:56.173705 2318 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 6 00:03:56.174970 kubelet[2318]: I0906 00:03:56.174938 2318 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:03:56.175242 kubelet[2318]: I0906 00:03:56.175178 2318 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:03:56.176039 kubelet[2318]: I0906 00:03:56.175994 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:03:56.180207 kubelet[2318]: E0906 00:03:56.180168 2318 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-45\" not found"
Sep 6 00:03:56.248379 kubelet[2318]: E0906 00:03:56.248236 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.45:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-45.186288a27a9edb95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-45,UID:ip-172-31-30-45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-45,},FirstTimestamp:2025-09-06 00:03:56.051225493 +0000 UTC m=+1.470541372,LastTimestamp:2025-09-06 00:03:56.051225493 +0000 UTC m=+1.470541372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-45,}"
Sep 6 00:03:56.268790 systemd[1]: Created slice kubepods-burstable-pod8795eb93a1f5a6d64b3fb5cc8316e2e1.slice.
Sep 6 00:03:56.277900 kubelet[2318]: I0906 00:03:56.277853 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-45"
Sep 6 00:03:56.279527 kubelet[2318]: E0906 00:03:56.279484 2318 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.45:6443/api/v1/nodes\": dial tcp 172.31.30.45:6443: connect: connection refused" node="ip-172-31-30-45"
Sep 6 00:03:56.283263 kubelet[2318]: I0906 00:03:56.283222 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/250db1d8b6bd2fa37ecd557ba2bc0f0b-ca-certs\") pod \"kube-apiserver-ip-172-31-30-45\" (UID: \"250db1d8b6bd2fa37ecd557ba2bc0f0b\") " pod="kube-system/kube-apiserver-ip-172-31-30-45"
Sep 6 00:03:56.283580 kubelet[2318]: I0906 00:03:56.283552 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45"
Sep 6 00:03:56.283735 kubelet[2318]: I0906 00:03:56.283709 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45"
Sep 6 00:03:56.283876 kubelet[2318]: I0906 00:03:56.283849 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/250db1d8b6bd2fa37ecd557ba2bc0f0b-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-45\" (UID: \"250db1d8b6bd2fa37ecd557ba2bc0f0b\") " pod="kube-system/kube-apiserver-ip-172-31-30-45"
Sep 6 00:03:56.284027 kubelet[2318]: I0906 00:03:56.283990 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/250db1d8b6bd2fa37ecd557ba2bc0f0b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-45\" (UID: \"250db1d8b6bd2fa37ecd557ba2bc0f0b\") " pod="kube-system/kube-apiserver-ip-172-31-30-45"
Sep 6 00:03:56.284121 systemd[1]: Created slice kubepods-burstable-pod250db1d8b6bd2fa37ecd557ba2bc0f0b.slice.
Sep 6 00:03:56.285143 kubelet[2318]: I0906 00:03:56.285112 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45"
Sep 6 00:03:56.285299 kubelet[2318]: I0906 00:03:56.285270 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45"
Sep 6 00:03:56.285488 kubelet[2318]: I0906 00:03:56.285435 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45"
Sep 6 00:03:56.285649 kubelet[2318]: I0906 00:03:56.285622 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8795eb93a1f5a6d64b3fb5cc8316e2e1-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-45\" (UID: \"8795eb93a1f5a6d64b3fb5cc8316e2e1\") " pod="kube-system/kube-scheduler-ip-172-31-30-45"
Sep 6 00:03:56.286264 kubelet[2318]: E0906 00:03:56.286216 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-45?timeout=10s\": dial tcp 172.31.30.45:6443: connect: connection refused" interval="400ms"
Sep 6 00:03:56.303890 systemd[1]: Created slice kubepods-burstable-podc2d14409c020e099bafd5da476f54502.slice.
Sep 6 00:03:56.482515 kubelet[2318]: I0906 00:03:56.482338 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-45"
Sep 6 00:03:56.484382 kubelet[2318]: E0906 00:03:56.484316 2318 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.45:6443/api/v1/nodes\": dial tcp 172.31.30.45:6443: connect: connection refused" node="ip-172-31-30-45"
Sep 6 00:03:56.583534 env[1827]: time="2025-09-06T00:03:56.582913577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-45,Uid:8795eb93a1f5a6d64b3fb5cc8316e2e1,Namespace:kube-system,Attempt:0,}"
Sep 6 00:03:56.602929 env[1827]: time="2025-09-06T00:03:56.602845564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-45,Uid:250db1d8b6bd2fa37ecd557ba2bc0f0b,Namespace:kube-system,Attempt:0,}"
Sep 6 00:03:56.613339 env[1827]: time="2025-09-06T00:03:56.613258635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-45,Uid:c2d14409c020e099bafd5da476f54502,Namespace:kube-system,Attempt:0,}"
Sep 6 00:03:56.687319 kubelet[2318]: E0906 00:03:56.687231 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-45?timeout=10s\": dial tcp 172.31.30.45:6443: connect: connection refused" interval="800ms"
Sep 6 00:03:56.887456 kubelet[2318]: I0906 00:03:56.887006 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-45"
Sep 6 00:03:56.887974 kubelet[2318]: E0906 00:03:56.887899 2318 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.45:6443/api/v1/nodes\": dial tcp 172.31.30.45:6443: connect: connection refused" node="ip-172-31-30-45"
Sep 6 00:03:57.138979 kubelet[2318]: W0906 00:03:57.138728 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused
Sep 6 00:03:57.138979 kubelet[2318]: E0906 00:03:57.138833 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:03:57.164864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994438355.mount: Deactivated successfully.
Sep 6 00:03:57.181838 env[1827]: time="2025-09-06T00:03:57.181751598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.184203 env[1827]: time="2025-09-06T00:03:57.184123083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.192276 env[1827]: time="2025-09-06T00:03:57.192218871Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.195069 env[1827]: time="2025-09-06T00:03:57.195012007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.198714 env[1827]: time="2025-09-06T00:03:57.198627735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.201052 env[1827]: time="2025-09-06T00:03:57.200954386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.204520 env[1827]: time="2025-09-06T00:03:57.204460855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.206724 env[1827]: time="2025-09-06T00:03:57.206671655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.210777 env[1827]: time="2025-09-06T00:03:57.210711987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.222876 env[1827]: time="2025-09-06T00:03:57.222815985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.225496 env[1827]: time="2025-09-06T00:03:57.225378800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.231546 env[1827]: time="2025-09-06T00:03:57.231489462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:57.258069 kubelet[2318]: W0906 00:03:57.258005 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused
Sep 6 00:03:57.258240 kubelet[2318]: E0906 00:03:57.258084 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:03:57.281767 env[1827]: time="2025-09-06T00:03:57.281634056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:03:57.282105 env[1827]: time="2025-09-06T00:03:57.282032179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:03:57.282317 env[1827]: time="2025-09-06T00:03:57.282251046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:03:57.287543 env[1827]: time="2025-09-06T00:03:57.283195792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/536c46ddf0215c1549b6a62f9d92ae1d8eb58792cedb2a0965aabaea1197a66a pid=2358 runtime=io.containerd.runc.v2
Sep 6 00:03:57.326089 systemd[1]: Started cri-containerd-536c46ddf0215c1549b6a62f9d92ae1d8eb58792cedb2a0965aabaea1197a66a.scope.
Sep 6 00:03:57.339600 env[1827]: time="2025-09-06T00:03:57.339450784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:03:57.339600 env[1827]: time="2025-09-06T00:03:57.339532399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:03:57.340038 env[1827]: time="2025-09-06T00:03:57.339944073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:03:57.347119 env[1827]: time="2025-09-06T00:03:57.346889525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:03:57.347119 env[1827]: time="2025-09-06T00:03:57.347076894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:03:57.347510 env[1827]: time="2025-09-06T00:03:57.347142916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:03:57.347600 env[1827]: time="2025-09-06T00:03:57.347500825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b293072da781d8926770e13c80c062f43a41b4ef5210bccbe69b0d419c3337 pid=2400 runtime=io.containerd.runc.v2 Sep 6 00:03:57.349621 env[1827]: time="2025-09-06T00:03:57.343582225Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79659272e9e06ba17b69733adb07f49c227424b1431a584797b1b6b31c2c7e39 pid=2383 runtime=io.containerd.runc.v2 Sep 6 00:03:57.376745 systemd[1]: Started cri-containerd-79659272e9e06ba17b69733adb07f49c227424b1431a584797b1b6b31c2c7e39.scope. Sep 6 00:03:57.388326 kubelet[2318]: W0906 00:03:57.388172 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-45&limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused Sep 6 00:03:57.388326 kubelet[2318]: E0906 00:03:57.388271 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-45&limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:57.415411 systemd[1]: Started cri-containerd-92b293072da781d8926770e13c80c062f43a41b4ef5210bccbe69b0d419c3337.scope. 
Sep 6 00:03:57.418835 kubelet[2318]: W0906 00:03:57.416174 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.45:6443: connect: connection refused Sep 6 00:03:57.418835 kubelet[2318]: E0906 00:03:57.416367 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:57.489325 kubelet[2318]: E0906 00:03:57.489255 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-45?timeout=10s\": dial tcp 172.31.30.45:6443: connect: connection refused" interval="1.6s" Sep 6 00:03:57.537218 env[1827]: time="2025-09-06T00:03:57.537145034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-45,Uid:8795eb93a1f5a6d64b3fb5cc8316e2e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"536c46ddf0215c1549b6a62f9d92ae1d8eb58792cedb2a0965aabaea1197a66a\"" Sep 6 00:03:57.543943 env[1827]: time="2025-09-06T00:03:57.543889156Z" level=info msg="CreateContainer within sandbox \"536c46ddf0215c1549b6a62f9d92ae1d8eb58792cedb2a0965aabaea1197a66a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:03:57.558555 env[1827]: time="2025-09-06T00:03:57.558496946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-45,Uid:c2d14409c020e099bafd5da476f54502,Namespace:kube-system,Attempt:0,} returns sandbox id \"79659272e9e06ba17b69733adb07f49c227424b1431a584797b1b6b31c2c7e39\"" Sep 6 00:03:57.568052 env[1827]: time="2025-09-06T00:03:57.565025393Z" 
level=info msg="CreateContainer within sandbox \"79659272e9e06ba17b69733adb07f49c227424b1431a584797b1b6b31c2c7e39\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:03:57.572452 env[1827]: time="2025-09-06T00:03:57.572345127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-45,Uid:250db1d8b6bd2fa37ecd557ba2bc0f0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"92b293072da781d8926770e13c80c062f43a41b4ef5210bccbe69b0d419c3337\"" Sep 6 00:03:57.579963 env[1827]: time="2025-09-06T00:03:57.579908733Z" level=info msg="CreateContainer within sandbox \"92b293072da781d8926770e13c80c062f43a41b4ef5210bccbe69b0d419c3337\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:03:57.586339 env[1827]: time="2025-09-06T00:03:57.586276688Z" level=info msg="CreateContainer within sandbox \"536c46ddf0215c1549b6a62f9d92ae1d8eb58792cedb2a0965aabaea1197a66a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8fa86948ea98d33cec634b9287de65902486550ee29656e77276b2b3cf5f775c\"" Sep 6 00:03:57.588174 env[1827]: time="2025-09-06T00:03:57.588124357Z" level=info msg="StartContainer for \"8fa86948ea98d33cec634b9287de65902486550ee29656e77276b2b3cf5f775c\"" Sep 6 00:03:57.609788 env[1827]: time="2025-09-06T00:03:57.609723298Z" level=info msg="CreateContainer within sandbox \"79659272e9e06ba17b69733adb07f49c227424b1431a584797b1b6b31c2c7e39\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374\"" Sep 6 00:03:57.611226 env[1827]: time="2025-09-06T00:03:57.611156604Z" level=info msg="StartContainer for \"0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374\"" Sep 6 00:03:57.627219 env[1827]: time="2025-09-06T00:03:57.627155122Z" level=info msg="CreateContainer within sandbox \"92b293072da781d8926770e13c80c062f43a41b4ef5210bccbe69b0d419c3337\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3124fdd270535d395ebb7264178d19a953a601ff650de37945e755598e2f2a6a\"" Sep 6 00:03:57.630741 systemd[1]: Started cri-containerd-8fa86948ea98d33cec634b9287de65902486550ee29656e77276b2b3cf5f775c.scope. Sep 6 00:03:57.639717 env[1827]: time="2025-09-06T00:03:57.637517505Z" level=info msg="StartContainer for \"3124fdd270535d395ebb7264178d19a953a601ff650de37945e755598e2f2a6a\"" Sep 6 00:03:57.669483 systemd[1]: Started cri-containerd-0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374.scope. Sep 6 00:03:57.690470 kubelet[2318]: I0906 00:03:57.690382 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-45" Sep 6 00:03:57.691337 kubelet[2318]: E0906 00:03:57.691276 2318 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.45:6443/api/v1/nodes\": dial tcp 172.31.30.45:6443: connect: connection refused" node="ip-172-31-30-45" Sep 6 00:03:57.710997 systemd[1]: Started cri-containerd-3124fdd270535d395ebb7264178d19a953a601ff650de37945e755598e2f2a6a.scope. 
Sep 6 00:03:57.816322 env[1827]: time="2025-09-06T00:03:57.816261170Z" level=info msg="StartContainer for \"8fa86948ea98d33cec634b9287de65902486550ee29656e77276b2b3cf5f775c\" returns successfully" Sep 6 00:03:57.823446 env[1827]: time="2025-09-06T00:03:57.823319975Z" level=info msg="StartContainer for \"0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374\" returns successfully" Sep 6 00:03:57.861701 env[1827]: time="2025-09-06T00:03:57.861625822Z" level=info msg="StartContainer for \"3124fdd270535d395ebb7264178d19a953a601ff650de37945e755598e2f2a6a\" returns successfully" Sep 6 00:03:58.018783 kubelet[2318]: E0906 00:03:58.018533 2318 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.45:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:59.294290 kubelet[2318]: I0906 00:03:59.294237 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-45" Sep 6 00:03:59.299671 amazon-ssm-agent[1800]: 2025-09-06 00:03:59 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 6 00:04:02.216658 kubelet[2318]: E0906 00:04:02.216593 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-45\" not found" node="ip-172-31-30-45" Sep 6 00:04:02.289983 kubelet[2318]: I0906 00:04:02.289920 2318 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-45" Sep 6 00:04:02.290170 kubelet[2318]: E0906 00:04:02.290032 2318 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-45\": node \"ip-172-31-30-45\" not found" Sep 6 00:04:03.043625 kubelet[2318]: I0906 
00:04:03.043578 2318 apiserver.go:52] "Watching apiserver" Sep 6 00:04:03.079334 kubelet[2318]: I0906 00:04:03.079256 2318 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:04:04.409467 update_engine[1813]: I0906 00:04:04.408866 1813 update_attempter.cc:509] Updating boot flags... Sep 6 00:04:04.624881 systemd[1]: Reloading. Sep 6 00:04:04.864738 /usr/lib/systemd/system-generators/torcx-generator[2715]: time="2025-09-06T00:04:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:04:04.864799 /usr/lib/systemd/system-generators/torcx-generator[2715]: time="2025-09-06T00:04:04Z" level=info msg="torcx already run" Sep 6 00:04:05.193710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:04:05.193752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:04:05.238476 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:04:05.610063 systemd[1]: Stopping kubelet.service... Sep 6 00:04:05.631453 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:04:05.631864 systemd[1]: Stopped kubelet.service. Sep 6 00:04:05.631942 systemd[1]: kubelet.service: Consumed 2.215s CPU time. Sep 6 00:04:05.635994 systemd[1]: Starting kubelet.service... Sep 6 00:04:05.997794 systemd[1]: Started kubelet.service. 
Sep 6 00:04:06.126864 kubelet[2856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:04:06.127438 kubelet[2856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:04:06.127576 kubelet[2856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:04:06.128189 kubelet[2856]: I0906 00:04:06.128109 2856 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:04:06.148472 kubelet[2856]: I0906 00:04:06.148385 2856 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:04:06.148706 kubelet[2856]: I0906 00:04:06.148682 2856 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:04:06.149345 kubelet[2856]: I0906 00:04:06.149296 2856 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:04:06.155422 kubelet[2856]: I0906 00:04:06.155350 2856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 00:04:06.159953 kubelet[2856]: I0906 00:04:06.159907 2856 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:04:06.171196 kubelet[2856]: E0906 00:04:06.171145 2856 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:04:06.171444 kubelet[2856]: I0906 00:04:06.171415 2856 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:04:06.176280 kubelet[2856]: I0906 00:04:06.176245 2856 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:04:06.176780 kubelet[2856]: I0906 00:04:06.176749 2856 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:04:06.177415 kubelet[2856]: I0906 00:04:06.177342 2856 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:04:06.177902 kubelet[2856]: I0906 00:04:06.177614 2856 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-30-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:04:06.178143 kubelet[2856]: I0906 00:04:06.178117 2856 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:04:06.178274 kubelet[2856]: I0906 00:04:06.178253 2856 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:04:06.178551 kubelet[2856]: I0906 00:04:06.178529 2856 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:04:06.178893 kubelet[2856]: I0906 00:04:06.178869 2856 kubelet.go:408] 
"Attempting to sync node with API server" Sep 6 00:04:06.179053 kubelet[2856]: I0906 00:04:06.179031 2856 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:04:06.179196 kubelet[2856]: I0906 00:04:06.179175 2856 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:04:06.179369 kubelet[2856]: I0906 00:04:06.179347 2856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:04:06.185326 kubelet[2856]: I0906 00:04:06.185285 2856 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:04:06.189281 kubelet[2856]: I0906 00:04:06.189235 2856 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:04:06.191099 kubelet[2856]: I0906 00:04:06.191053 2856 server.go:1274] "Started kubelet" Sep 6 00:04:06.196869 kubelet[2856]: I0906 00:04:06.196828 2856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:04:06.203260 sudo[2871]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:04:06.203866 sudo[2871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:04:06.218629 kubelet[2856]: I0906 00:04:06.218539 2856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:04:06.220477 kubelet[2856]: I0906 00:04:06.220435 2856 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:04:06.226418 kubelet[2856]: I0906 00:04:06.224321 2856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:04:06.226971 kubelet[2856]: I0906 00:04:06.226933 2856 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:04:06.227552 kubelet[2856]: I0906 00:04:06.227514 2856 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:04:06.235926 kubelet[2856]: I0906 00:04:06.235887 2856 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:04:06.238569 kubelet[2856]: E0906 00:04:06.238522 2856 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-45\" not found" Sep 6 00:04:06.263006 kubelet[2856]: I0906 00:04:06.262867 2856 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:04:06.281587 kubelet[2856]: I0906 00:04:06.265273 2856 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:04:06.286432 kubelet[2856]: I0906 00:04:06.282662 2856 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:04:06.286432 kubelet[2856]: I0906 00:04:06.283022 2856 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:04:06.302926 kubelet[2856]: E0906 00:04:06.302857 2856 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:04:06.303731 kubelet[2856]: I0906 00:04:06.303693 2856 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:04:06.313778 kubelet[2856]: I0906 00:04:06.311762 2856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:04:06.333889 kubelet[2856]: I0906 00:04:06.330343 2856 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:04:06.333889 kubelet[2856]: I0906 00:04:06.330471 2856 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:04:06.333889 kubelet[2856]: I0906 00:04:06.330506 2856 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:04:06.333889 kubelet[2856]: E0906 00:04:06.330820 2856 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:04:06.432274 kubelet[2856]: E0906 00:04:06.431564 2856 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:04:06.476669 kubelet[2856]: I0906 00:04:06.476620 2856 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:04:06.476669 kubelet[2856]: I0906 00:04:06.476656 2856 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:04:06.476916 kubelet[2856]: I0906 00:04:06.476694 2856 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:04:06.476991 kubelet[2856]: I0906 00:04:06.476955 2856 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:04:06.477065 kubelet[2856]: I0906 00:04:06.476979 2856 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:04:06.477065 kubelet[2856]: I0906 00:04:06.477018 2856 policy_none.go:49] "None policy: Start" Sep 6 00:04:06.478735 kubelet[2856]: I0906 00:04:06.478690 2856 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:04:06.478942 kubelet[2856]: I0906 00:04:06.478905 2856 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:04:06.479276 kubelet[2856]: I0906 00:04:06.479224 2856 state_mem.go:75] "Updated machine memory state" Sep 6 00:04:06.491338 kubelet[2856]: I0906 00:04:06.491276 2856 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:04:06.492017 kubelet[2856]: I0906 00:04:06.491968 2856 
eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:04:06.492144 kubelet[2856]: I0906 00:04:06.492008 2856 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:04:06.494641 kubelet[2856]: I0906 00:04:06.494592 2856 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:04:06.613313 kubelet[2856]: I0906 00:04:06.613276 2856 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-45" Sep 6 00:04:06.653035 kubelet[2856]: E0906 00:04:06.652960 2856 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-45\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-45" Sep 6 00:04:06.653354 kubelet[2856]: I0906 00:04:06.653300 2856 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-30-45" Sep 6 00:04:06.653678 kubelet[2856]: I0906 00:04:06.653652 2856 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-45" Sep 6 00:04:06.666075 kubelet[2856]: E0906 00:04:06.666017 2856 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-45\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-45" Sep 6 00:04:06.692329 kubelet[2856]: I0906 00:04:06.692195 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45" Sep 6 00:04:06.692526 kubelet[2856]: I0906 00:04:06.692379 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8795eb93a1f5a6d64b3fb5cc8316e2e1-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-45\" (UID: 
\"8795eb93a1f5a6d64b3fb5cc8316e2e1\") " pod="kube-system/kube-scheduler-ip-172-31-30-45" Sep 6 00:04:06.692603 kubelet[2856]: I0906 00:04:06.692551 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/250db1d8b6bd2fa37ecd557ba2bc0f0b-ca-certs\") pod \"kube-apiserver-ip-172-31-30-45\" (UID: \"250db1d8b6bd2fa37ecd557ba2bc0f0b\") " pod="kube-system/kube-apiserver-ip-172-31-30-45" Sep 6 00:04:06.692684 kubelet[2856]: I0906 00:04:06.692629 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/250db1d8b6bd2fa37ecd557ba2bc0f0b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-45\" (UID: \"250db1d8b6bd2fa37ecd557ba2bc0f0b\") " pod="kube-system/kube-apiserver-ip-172-31-30-45" Sep 6 00:04:06.692831 kubelet[2856]: I0906 00:04:06.692791 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45" Sep 6 00:04:06.692915 kubelet[2856]: I0906 00:04:06.692889 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45" Sep 6 00:04:06.693032 kubelet[2856]: I0906 00:04:06.692991 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/250db1d8b6bd2fa37ecd557ba2bc0f0b-k8s-certs\") pod 
\"kube-apiserver-ip-172-31-30-45\" (UID: \"250db1d8b6bd2fa37ecd557ba2bc0f0b\") " pod="kube-system/kube-apiserver-ip-172-31-30-45" Sep 6 00:04:06.693175 kubelet[2856]: I0906 00:04:06.693093 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45" Sep 6 00:04:06.693258 kubelet[2856]: I0906 00:04:06.693226 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2d14409c020e099bafd5da476f54502-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-45\" (UID: \"c2d14409c020e099bafd5da476f54502\") " pod="kube-system/kube-controller-manager-ip-172-31-30-45" Sep 6 00:04:07.181968 kubelet[2856]: I0906 00:04:07.181922 2856 apiserver.go:52] "Watching apiserver" Sep 6 00:04:07.275138 kubelet[2856]: I0906 00:04:07.275096 2856 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:04:07.312340 sudo[2871]: pam_unix(sudo:session): session closed for user root Sep 6 00:04:07.450752 kubelet[2856]: I0906 00:04:07.449891 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-45" podStartSLOduration=3.449867854 podStartE2EDuration="3.449867854s" podCreationTimestamp="2025-09-06 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:07.434341374 +0000 UTC m=+1.426553699" watchObservedRunningTime="2025-09-06 00:04:07.449867854 +0000 UTC m=+1.442080167" Sep 6 00:04:07.469222 kubelet[2856]: I0906 00:04:07.469132 2856 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-ip-172-31-30-45" podStartSLOduration=4.469109748 podStartE2EDuration="4.469109748s" podCreationTimestamp="2025-09-06 00:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:07.452126585 +0000 UTC m=+1.444338910" watchObservedRunningTime="2025-09-06 00:04:07.469109748 +0000 UTC m=+1.461322145" Sep 6 00:04:07.512429 kubelet[2856]: I0906 00:04:07.512327 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-45" podStartSLOduration=1.51230559 podStartE2EDuration="1.51230559s" podCreationTimestamp="2025-09-06 00:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:07.473536383 +0000 UTC m=+1.465748720" watchObservedRunningTime="2025-09-06 00:04:07.51230559 +0000 UTC m=+1.504517891" Sep 6 00:04:08.863296 kubelet[2856]: I0906 00:04:08.863019 2856 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:04:08.864095 env[1827]: time="2025-09-06T00:04:08.864011245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:04:08.865169 kubelet[2856]: I0906 00:04:08.865104 2856 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:04:09.511569 systemd[1]: Created slice kubepods-besteffort-pod23dd3403_4acc_4787_b854_57fb21fb83fd.slice. 
Sep 6 00:04:09.623665 kubelet[2856]: I0906 00:04:09.623580 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23dd3403-4acc-4787-b854-57fb21fb83fd-kube-proxy\") pod \"kube-proxy-d8ztf\" (UID: \"23dd3403-4acc-4787-b854-57fb21fb83fd\") " pod="kube-system/kube-proxy-d8ztf" Sep 6 00:04:09.623854 kubelet[2856]: I0906 00:04:09.623669 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23dd3403-4acc-4787-b854-57fb21fb83fd-xtables-lock\") pod \"kube-proxy-d8ztf\" (UID: \"23dd3403-4acc-4787-b854-57fb21fb83fd\") " pod="kube-system/kube-proxy-d8ztf" Sep 6 00:04:09.623854 kubelet[2856]: I0906 00:04:09.623746 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c67dk\" (UniqueName: \"kubernetes.io/projected/23dd3403-4acc-4787-b854-57fb21fb83fd-kube-api-access-c67dk\") pod \"kube-proxy-d8ztf\" (UID: \"23dd3403-4acc-4787-b854-57fb21fb83fd\") " pod="kube-system/kube-proxy-d8ztf" Sep 6 00:04:09.623854 kubelet[2856]: I0906 00:04:09.623822 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23dd3403-4acc-4787-b854-57fb21fb83fd-lib-modules\") pod \"kube-proxy-d8ztf\" (UID: \"23dd3403-4acc-4787-b854-57fb21fb83fd\") " pod="kube-system/kube-proxy-d8ztf" Sep 6 00:04:09.753071 kubelet[2856]: I0906 00:04:09.752977 2856 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:04:09.827846 env[1827]: time="2025-09-06T00:04:09.827647416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8ztf,Uid:23dd3403-4acc-4787-b854-57fb21fb83fd,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:09.882082 env[1827]: time="2025-09-06T00:04:09.879569811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:09.882082 env[1827]: time="2025-09-06T00:04:09.879663914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:09.882082 env[1827]: time="2025-09-06T00:04:09.879692285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:09.882082 env[1827]: time="2025-09-06T00:04:09.880057402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ea738820d90e8a1f5604ba5121bb255e4aec6987f553348f32fcdec486591ba pid=2904 runtime=io.containerd.runc.v2 Sep 6 00:04:09.946125 systemd[1]: run-containerd-runc-k8s.io-0ea738820d90e8a1f5604ba5121bb255e4aec6987f553348f32fcdec486591ba-runc.hIRlRt.mount: Deactivated successfully. Sep 6 00:04:09.966848 systemd[1]: Started cri-containerd-0ea738820d90e8a1f5604ba5121bb255e4aec6987f553348f32fcdec486591ba.scope. Sep 6 00:04:10.096068 systemd[1]: Created slice kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice. Sep 6 00:04:10.128979 systemd[1]: Created slice kubepods-besteffort-pod98b9ef45_5a24_41ed_a7fe_c8f78c4523be.slice. 
Sep 6 00:04:10.130360 kubelet[2856]: I0906 00:04:10.130299 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-bpf-maps\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.130945 kubelet[2856]: I0906 00:04:10.130431 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-lib-modules\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.130945 kubelet[2856]: I0906 00:04:10.130480 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-cgroup\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.130945 kubelet[2856]: I0906 00:04:10.130548 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cni-path\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.130945 kubelet[2856]: I0906 00:04:10.130615 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-etc-cni-netd\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.130945 kubelet[2856]: I0906 00:04:10.130661 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-hubble-tls\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.130945 kubelet[2856]: I0906 00:04:10.130752 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s529m\" (UniqueName: \"kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-kube-api-access-s529m\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.131422 kubelet[2856]: I0906 00:04:10.130827 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/840711c9-46ec-4cfc-892d-6055289a4794-clustermesh-secrets\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.131422 kubelet[2856]: I0906 00:04:10.130911 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-run\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.131422 kubelet[2856]: I0906 00:04:10.130993 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-hostproc\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.131422 kubelet[2856]: I0906 00:04:10.131083 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/840711c9-46ec-4cfc-892d-6055289a4794-cilium-config-path\") pod 
\"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.131422 kubelet[2856]: I0906 00:04:10.131131 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-xtables-lock\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.131422 kubelet[2856]: I0906 00:04:10.131200 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-net\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.131825 kubelet[2856]: I0906 00:04:10.131286 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-kernel\") pod \"cilium-bmgmx\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " pod="kube-system/cilium-bmgmx" Sep 6 00:04:10.222229 env[1827]: time="2025-09-06T00:04:10.222152830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8ztf,Uid:23dd3403-4acc-4787-b854-57fb21fb83fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ea738820d90e8a1f5604ba5121bb255e4aec6987f553348f32fcdec486591ba\"" Sep 6 00:04:10.231180 env[1827]: time="2025-09-06T00:04:10.231108163Z" level=info msg="CreateContainer within sandbox \"0ea738820d90e8a1f5604ba5121bb255e4aec6987f553348f32fcdec486591ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:04:10.231871 kubelet[2856]: I0906 00:04:10.231786 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dsw5\" (UniqueName: 
\"kubernetes.io/projected/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-kube-api-access-5dsw5\") pod \"cilium-operator-5d85765b45-df5tq\" (UID: \"98b9ef45-5a24-41ed-a7fe-c8f78c4523be\") " pod="kube-system/cilium-operator-5d85765b45-df5tq" Sep 6 00:04:10.232235 kubelet[2856]: I0906 00:04:10.232173 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-cilium-config-path\") pod \"cilium-operator-5d85765b45-df5tq\" (UID: \"98b9ef45-5a24-41ed-a7fe-c8f78c4523be\") " pod="kube-system/cilium-operator-5d85765b45-df5tq" Sep 6 00:04:10.272343 env[1827]: time="2025-09-06T00:04:10.272269366Z" level=info msg="CreateContainer within sandbox \"0ea738820d90e8a1f5604ba5121bb255e4aec6987f553348f32fcdec486591ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51ad8c4964629ed3fc3683324758f4b854e6a3dd08357d3de4db40642956ae48\"" Sep 6 00:04:10.276311 env[1827]: time="2025-09-06T00:04:10.276246891Z" level=info msg="StartContainer for \"51ad8c4964629ed3fc3683324758f4b854e6a3dd08357d3de4db40642956ae48\"" Sep 6 00:04:10.321383 systemd[1]: Started cri-containerd-51ad8c4964629ed3fc3683324758f4b854e6a3dd08357d3de4db40642956ae48.scope. 
Sep 6 00:04:10.407226 env[1827]: time="2025-09-06T00:04:10.406189616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmgmx,Uid:840711c9-46ec-4cfc-892d-6055289a4794,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:10.437783 env[1827]: time="2025-09-06T00:04:10.437708642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-df5tq,Uid:98b9ef45-5a24-41ed-a7fe-c8f78c4523be,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:10.438795 env[1827]: time="2025-09-06T00:04:10.438721527Z" level=info msg="StartContainer for \"51ad8c4964629ed3fc3683324758f4b854e6a3dd08357d3de4db40642956ae48\" returns successfully" Sep 6 00:04:10.449028 env[1827]: time="2025-09-06T00:04:10.448820761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:10.449028 env[1827]: time="2025-09-06T00:04:10.448903666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:10.449028 env[1827]: time="2025-09-06T00:04:10.448931233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:10.449950 env[1827]: time="2025-09-06T00:04:10.449843654Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372 pid=2980 runtime=io.containerd.runc.v2 Sep 6 00:04:10.505868 systemd[1]: Started cri-containerd-a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372.scope. Sep 6 00:04:10.515518 env[1827]: time="2025-09-06T00:04:10.513816030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:10.515518 env[1827]: time="2025-09-06T00:04:10.513915845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:10.515518 env[1827]: time="2025-09-06T00:04:10.513946088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:10.515518 env[1827]: time="2025-09-06T00:04:10.514308959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93 pid=3009 runtime=io.containerd.runc.v2 Sep 6 00:04:10.544537 systemd[1]: Started cri-containerd-4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93.scope. Sep 6 00:04:10.608755 env[1827]: time="2025-09-06T00:04:10.608693435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmgmx,Uid:840711c9-46ec-4cfc-892d-6055289a4794,Namespace:kube-system,Attempt:0,} returns sandbox id \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\"" Sep 6 00:04:10.612753 env[1827]: time="2025-09-06T00:04:10.612699139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:04:10.703722 env[1827]: time="2025-09-06T00:04:10.703565931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-df5tq,Uid:98b9ef45-5a24-41ed-a7fe-c8f78c4523be,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93\"" Sep 6 00:04:11.440939 kubelet[2856]: I0906 00:04:11.440834 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d8ztf" podStartSLOduration=2.440810878 podStartE2EDuration="2.440810878s" podCreationTimestamp="2025-09-06 00:04:09 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:11.440808274 +0000 UTC m=+5.433020575" watchObservedRunningTime="2025-09-06 00:04:11.440810878 +0000 UTC m=+5.433023179" Sep 6 00:04:17.483336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363267027.mount: Deactivated successfully. Sep 6 00:04:21.741444 env[1827]: time="2025-09-06T00:04:21.741343314Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:21.744245 env[1827]: time="2025-09-06T00:04:21.744174747Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:21.747242 env[1827]: time="2025-09-06T00:04:21.747183011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:21.748820 env[1827]: time="2025-09-06T00:04:21.748744236Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:04:21.755865 env[1827]: time="2025-09-06T00:04:21.755802117Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:04:21.759369 env[1827]: time="2025-09-06T00:04:21.759316380Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:04:21.779259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773083080.mount: Deactivated successfully. Sep 6 00:04:21.798332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882995799.mount: Deactivated successfully. Sep 6 00:04:21.804346 env[1827]: time="2025-09-06T00:04:21.804257734Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\"" Sep 6 00:04:21.806872 env[1827]: time="2025-09-06T00:04:21.806817170Z" level=info msg="StartContainer for \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\"" Sep 6 00:04:21.849342 systemd[1]: Started cri-containerd-d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4.scope. Sep 6 00:04:21.889653 systemd[1]: cri-containerd-d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4.scope: Deactivated successfully. Sep 6 00:04:22.772450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4-rootfs.mount: Deactivated successfully. 
Sep 6 00:04:22.878334 env[1827]: time="2025-09-06T00:04:22.877849006Z" level=info msg="shim disconnected" id=d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4 Sep 6 00:04:22.878334 env[1827]: time="2025-09-06T00:04:22.878006539Z" level=warning msg="cleaning up after shim disconnected" id=d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4 namespace=k8s.io Sep 6 00:04:22.878334 env[1827]: time="2025-09-06T00:04:22.878037357Z" level=info msg="cleaning up dead shim" Sep 6 00:04:22.894695 env[1827]: time="2025-09-06T00:04:22.894598262Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3218 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:04:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:04:22.895273 env[1827]: time="2025-09-06T00:04:22.895097180Z" level=error msg="copy shim log" error="read /proc/self/fd/50: file already closed" Sep 6 00:04:22.897602 env[1827]: time="2025-09-06T00:04:22.897498104Z" level=error msg="Failed to pipe stderr of container \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\"" error="reading from a closed fifo" Sep 6 00:04:22.897602 env[1827]: time="2025-09-06T00:04:22.897503120Z" level=error msg="Failed to pipe stdout of container \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\"" error="reading from a closed fifo" Sep 6 00:04:22.900562 env[1827]: time="2025-09-06T00:04:22.900257140Z" level=error msg="StartContainer for \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:04:22.900885 kubelet[2856]: E0906 00:04:22.900759 2856 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4" Sep 6 00:04:22.902632 kubelet[2856]: E0906 00:04:22.901008 2856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:04:22.902632 kubelet[2856]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:04:22.902632 kubelet[2856]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:04:22.902632 kubelet[2856]: rm /hostbin/cilium-mount Sep 6 00:04:22.903292 kubelet[2856]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s529m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:04:22.903292 kubelet[2856]: > logger="UnhandledError" Sep 6 00:04:22.904564 kubelet[2856]: E0906 00:04:22.904439 2856 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:04:23.500106 env[1827]: time="2025-09-06T00:04:23.500042145Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Sep 6 00:04:23.549652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480816292.mount: Deactivated successfully. Sep 6 00:04:23.563639 env[1827]: time="2025-09-06T00:04:23.563512422Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\"" Sep 6 00:04:23.564729 env[1827]: time="2025-09-06T00:04:23.564652031Z" level=info msg="StartContainer for \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\"" Sep 6 00:04:23.658982 systemd[1]: Started cri-containerd-92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163.scope. Sep 6 00:04:23.705785 systemd[1]: cri-containerd-92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163.scope: Deactivated successfully. 
Sep 6 00:04:23.753913 env[1827]: time="2025-09-06T00:04:23.753734337Z" level=info msg="shim disconnected" id=92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163 Sep 6 00:04:23.754792 env[1827]: time="2025-09-06T00:04:23.754727705Z" level=warning msg="cleaning up after shim disconnected" id=92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163 namespace=k8s.io Sep 6 00:04:23.754792 env[1827]: time="2025-09-06T00:04:23.754780640Z" level=info msg="cleaning up dead shim" Sep 6 00:04:23.772847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530701163.mount: Deactivated successfully. Sep 6 00:04:23.773085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163-rootfs.mount: Deactivated successfully. Sep 6 00:04:23.791473 env[1827]: time="2025-09-06T00:04:23.791240803Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3257 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:04:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:04:23.792009 env[1827]: time="2025-09-06T00:04:23.791903025Z" level=error msg="copy shim log" error="read /proc/self/fd/102: file already closed" Sep 6 00:04:23.793640 env[1827]: time="2025-09-06T00:04:23.793555700Z" level=error msg="Failed to pipe stderr of container \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\"" error="reading from a closed fifo" Sep 6 00:04:23.794057 env[1827]: time="2025-09-06T00:04:23.793996077Z" level=error msg="Failed to pipe stdout of container \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\"" error="reading from a closed fifo" Sep 6 00:04:23.796562 env[1827]: 
time="2025-09-06T00:04:23.796413579Z" level=error msg="StartContainer for \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:04:23.797544 kubelet[2856]: E0906 00:04:23.797105 2856 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163" Sep 6 00:04:23.797544 kubelet[2856]: E0906 00:04:23.797383 2856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:04:23.797544 kubelet[2856]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:04:23.797544 kubelet[2856]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:04:23.797544 kubelet[2856]: rm /hostbin/cilium-mount Sep 6 00:04:23.797544 kubelet[2856]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s529m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:04:23.797544 kubelet[2856]: > logger="UnhandledError" Sep 6 00:04:23.799148 kubelet[2856]: E0906 00:04:23.798861 2856 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:04:24.503937 kubelet[2856]: I0906 00:04:24.503900 2856 scope.go:117] "RemoveContainer" containerID="d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4" Sep 6 00:04:24.505064 kubelet[2856]: I0906 00:04:24.504884 2856 scope.go:117] "RemoveContainer" containerID="d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4" Sep 6 00:04:24.525144 env[1827]: time="2025-09-06T00:04:24.525087640Z" level=info msg="RemoveContainer for \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\"" Sep 6 00:04:24.529936 env[1827]: time="2025-09-06T00:04:24.529879443Z" level=info msg="RemoveContainer for \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\" returns successfully" Sep 6 00:04:24.533119 kubelet[2856]: E0906 00:04:24.532411 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:04:24.542960 env[1827]: time="2025-09-06T00:04:24.541240934Z" level=info msg="RemoveContainer for \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\"" Sep 6 00:04:24.542960 env[1827]: time="2025-09-06T00:04:24.541313826Z" level=info msg="RemoveContainer for \"d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4\" returns successfully" Sep 6 00:04:24.661031 env[1827]: time="2025-09-06T00:04:24.660965012Z" level=info 
msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:24.664543 env[1827]: time="2025-09-06T00:04:24.664476260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:24.667678 env[1827]: time="2025-09-06T00:04:24.667607880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:24.668993 env[1827]: time="2025-09-06T00:04:24.668905716Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 00:04:24.677873 env[1827]: time="2025-09-06T00:04:24.677808332Z" level=info msg="CreateContainer within sandbox \"4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:04:24.698655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825069634.mount: Deactivated successfully. 
Sep 6 00:04:24.716651 env[1827]: time="2025-09-06T00:04:24.716579844Z" level=info msg="CreateContainer within sandbox \"4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\"" Sep 6 00:04:24.718101 env[1827]: time="2025-09-06T00:04:24.718013358Z" level=info msg="StartContainer for \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\"" Sep 6 00:04:24.758584 systemd[1]: Started cri-containerd-7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af.scope. Sep 6 00:04:24.830055 env[1827]: time="2025-09-06T00:04:24.829970610Z" level=info msg="StartContainer for \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\" returns successfully" Sep 6 00:04:25.515509 kubelet[2856]: E0906 00:04:25.515383 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:04:25.788798 kubelet[2856]: I0906 00:04:25.788592 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-df5tq" podStartSLOduration=1.824193382 podStartE2EDuration="15.78856941s" podCreationTimestamp="2025-09-06 00:04:10 +0000 UTC" firstStartedPulling="2025-09-06 00:04:10.70672486 +0000 UTC m=+4.698937161" lastFinishedPulling="2025-09-06 00:04:24.6711009 +0000 UTC m=+18.663313189" observedRunningTime="2025-09-06 00:04:25.605957497 +0000 UTC m=+19.598169834" watchObservedRunningTime="2025-09-06 00:04:25.78856941 +0000 UTC m=+19.780781759" Sep 6 00:04:25.983577 kubelet[2856]: W0906 00:04:25.983507 2856 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice/cri-containerd-d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4.scope WatchSource:0}: container "d52bc29f9b12658fdf335414c9c8641c52cfa319e68ed631ade3a01818de2ed4" in namespace "k8s.io": not found Sep 6 00:04:29.106753 kubelet[2856]: W0906 00:04:29.106694 2856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice/cri-containerd-92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163.scope WatchSource:0}: task 92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163 not found: not found Sep 6 00:04:38.337201 env[1827]: time="2025-09-06T00:04:38.336287760Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Sep 6 00:04:38.356713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885217165.mount: Deactivated successfully. Sep 6 00:04:38.371268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454103520.mount: Deactivated successfully. Sep 6 00:04:38.378675 env[1827]: time="2025-09-06T00:04:38.378571346Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\"" Sep 6 00:04:38.384809 env[1827]: time="2025-09-06T00:04:38.384732223Z" level=info msg="StartContainer for \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\"" Sep 6 00:04:38.420216 systemd[1]: Started cri-containerd-1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054.scope. 
Sep 6 00:04:38.457679 systemd[1]: cri-containerd-1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054.scope: Deactivated successfully. Sep 6 00:04:38.558938 env[1827]: time="2025-09-06T00:04:38.558825282Z" level=info msg="shim disconnected" id=1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054 Sep 6 00:04:38.559271 env[1827]: time="2025-09-06T00:04:38.559239212Z" level=warning msg="cleaning up after shim disconnected" id=1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054 namespace=k8s.io Sep 6 00:04:38.559439 env[1827]: time="2025-09-06T00:04:38.559409966Z" level=info msg="cleaning up dead shim" Sep 6 00:04:38.574090 env[1827]: time="2025-09-06T00:04:38.574022800Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3334 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:04:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:04:38.574929 env[1827]: time="2025-09-06T00:04:38.574782030Z" level=error msg="copy shim log" error="read /proc/self/fd/61: file already closed" Sep 6 00:04:38.575672 env[1827]: time="2025-09-06T00:04:38.575611978Z" level=error msg="Failed to pipe stderr of container \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\"" error="reading from a closed fifo" Sep 6 00:04:38.575783 env[1827]: time="2025-09-06T00:04:38.575750918Z" level=error msg="Failed to pipe stdout of container \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\"" error="reading from a closed fifo" Sep 6 00:04:38.577867 env[1827]: time="2025-09-06T00:04:38.577785263Z" level=error msg="StartContainer for \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\" failed" error="failed to create containerd task: 
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:04:38.578165 kubelet[2856]: E0906 00:04:38.578075 2856 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054" Sep 6 00:04:38.578811 kubelet[2856]: E0906 00:04:38.578249 2856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:04:38.578811 kubelet[2856]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:04:38.578811 kubelet[2856]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:04:38.578811 kubelet[2856]: rm /hostbin/cilium-mount Sep 6 00:04:38.578811 kubelet[2856]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s529m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:04:38.578811 kubelet[2856]: > logger="UnhandledError" Sep 6 00:04:38.580282 kubelet[2856]: E0906 00:04:38.580205 2856 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:04:39.350378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054-rootfs.mount: Deactivated successfully. Sep 6 00:04:39.558180 kubelet[2856]: I0906 00:04:39.558130 2856 scope.go:117] "RemoveContainer" containerID="92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163" Sep 6 00:04:39.558767 kubelet[2856]: I0906 00:04:39.558729 2856 scope.go:117] "RemoveContainer" containerID="92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163" Sep 6 00:04:39.563380 env[1827]: time="2025-09-06T00:04:39.562904887Z" level=info msg="RemoveContainer for \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\"" Sep 6 00:04:39.568217 env[1827]: time="2025-09-06T00:04:39.568153907Z" level=info msg="RemoveContainer for \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\" returns successfully" Sep 6 00:04:39.573265 env[1827]: time="2025-09-06T00:04:39.573086562Z" level=error msg="ContainerStatus for \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\": not found" Sep 6 00:04:39.574461 kubelet[2856]: E0906 00:04:39.574386 2856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\": not found" 
containerID="92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163" Sep 6 00:04:39.574717 kubelet[2856]: E0906 00:04:39.574492 2856 kuberuntime_container.go:896] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": failed to get container status \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\": rpc error: code = NotFound desc = an error occurred when try to find container \"92eab87efae4cf491df36c17b743bc9c9587a7f5df48105997c916c73423c163\": not found; Skipping pod \"cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" logger="UnhandledError" Sep 6 00:04:39.577027 kubelet[2856]: E0906 00:04:39.576950 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:04:41.666070 kubelet[2856]: W0906 00:04:41.665972 2856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice/cri-containerd-1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054.scope WatchSource:0}: task 1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054 not found: not found Sep 6 00:04:52.334374 kubelet[2856]: E0906 00:04:52.334228 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:05:01.706161 amazon-ssm-agent[1800]: 2025-09-06 00:05:01 INFO [HealthCheck] HealthCheck reporting agent health. 
Sep 6 00:05:06.340224 env[1827]: time="2025-09-06T00:05:06.340162205Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Sep 6 00:05:06.361085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712834731.mount: Deactivated successfully. Sep 6 00:05:06.370310 env[1827]: time="2025-09-06T00:05:06.370224008Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\"" Sep 6 00:05:06.372297 env[1827]: time="2025-09-06T00:05:06.371484805Z" level=info msg="StartContainer for \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\"" Sep 6 00:05:06.415264 systemd[1]: Started cri-containerd-bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1.scope. Sep 6 00:05:06.464220 systemd[1]: cri-containerd-bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1.scope: Deactivated successfully. 
Sep 6 00:05:06.477755 env[1827]: time="2025-09-06T00:05:06.477669836Z" level=info msg="shim disconnected" id=bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1 Sep 6 00:05:06.477755 env[1827]: time="2025-09-06T00:05:06.477743719Z" level=warning msg="cleaning up after shim disconnected" id=bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1 namespace=k8s.io Sep 6 00:05:06.478113 env[1827]: time="2025-09-06T00:05:06.477766447Z" level=info msg="cleaning up dead shim" Sep 6 00:05:06.498669 env[1827]: time="2025-09-06T00:05:06.498593861Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:05:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3377 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:05:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:05:06.499418 env[1827]: time="2025-09-06T00:05:06.499301250Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed" Sep 6 00:05:06.502191 env[1827]: time="2025-09-06T00:05:06.500342211Z" level=error msg="Failed to pipe stderr of container \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\"" error="reading from a closed fifo" Sep 6 00:05:06.502767 env[1827]: time="2025-09-06T00:05:06.500273404Z" level=error msg="Failed to pipe stdout of container \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\"" error="reading from a closed fifo" Sep 6 00:05:06.503281 env[1827]: time="2025-09-06T00:05:06.503218399Z" level=error msg="StartContainer for \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:05:06.503727 kubelet[2856]: E0906 00:05:06.503662 2856 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1" Sep 6 00:05:06.504305 kubelet[2856]: E0906 00:05:06.503920 2856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:05:06.504305 kubelet[2856]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:05:06.504305 kubelet[2856]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:05:06.504305 kubelet[2856]: rm /hostbin/cilium-mount Sep 6 00:05:06.504305 kubelet[2856]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s529m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:05:06.504305 kubelet[2856]: > logger="UnhandledError" Sep 6 00:05:06.505085 kubelet[2856]: E0906 00:05:06.505031 2856 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:05:06.627953 kubelet[2856]: I0906 00:05:06.627806 2856 scope.go:117] "RemoveContainer" containerID="1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054" Sep 6 00:05:06.629449 kubelet[2856]: I0906 00:05:06.629110 2856 scope.go:117] "RemoveContainer" containerID="1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054" Sep 6 00:05:06.632799 env[1827]: time="2025-09-06T00:05:06.632744651Z" level=info msg="RemoveContainer for \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\"" Sep 6 00:05:06.636975 env[1827]: time="2025-09-06T00:05:06.636915908Z" level=info msg="RemoveContainer for \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\" returns successfully" Sep 6 00:05:06.637775 kubelet[2856]: E0906 00:05:06.637729 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:05:06.638909 env[1827]: time="2025-09-06T00:05:06.638859675Z" level=info msg="RemoveContainer for \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\"" Sep 6 00:05:06.639316 env[1827]: time="2025-09-06T00:05:06.639250713Z" level=info msg="RemoveContainer for \"1ed1673de1212cf73d7b93273a3c97b897caa53186bfafd57c60fe49e3b94054\" returns successfully" Sep 6 00:05:07.354154 systemd[1]: 
run-containerd-runc-k8s.io-bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1-runc.m87yJ1.mount: Deactivated successfully. Sep 6 00:05:07.354320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1-rootfs.mount: Deactivated successfully. Sep 6 00:05:09.584056 kubelet[2856]: W0906 00:05:09.583978 2856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice/cri-containerd-bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1.scope WatchSource:0}: task bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1 not found: not found Sep 6 00:05:20.332042 kubelet[2856]: E0906 00:05:20.331990 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:05:35.331684 kubelet[2856]: E0906 00:05:35.331617 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:05:46.332174 kubelet[2856]: E0906 00:05:46.332101 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:05:59.335168 env[1827]: 
time="2025-09-06T00:05:59.335088368Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Sep 6 00:05:59.356135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260688791.mount: Deactivated successfully. Sep 6 00:05:59.367464 env[1827]: time="2025-09-06T00:05:59.366978815Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\"" Sep 6 00:05:59.370600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698774483.mount: Deactivated successfully. Sep 6 00:05:59.372250 env[1827]: time="2025-09-06T00:05:59.372169070Z" level=info msg="StartContainer for \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\"" Sep 6 00:05:59.414302 systemd[1]: Started cri-containerd-cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be.scope. Sep 6 00:05:59.453560 systemd[1]: cri-containerd-cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be.scope: Deactivated successfully. 
Sep 6 00:05:59.469383 env[1827]: time="2025-09-06T00:05:59.469274964Z" level=info msg="shim disconnected" id=cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be Sep 6 00:05:59.469776 env[1827]: time="2025-09-06T00:05:59.469742273Z" level=warning msg="cleaning up after shim disconnected" id=cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be namespace=k8s.io Sep 6 00:05:59.469893 env[1827]: time="2025-09-06T00:05:59.469866294Z" level=info msg="cleaning up dead shim" Sep 6 00:05:59.483547 env[1827]: time="2025-09-06T00:05:59.483483944Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:05:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3420 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:05:59Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:05:59.484235 env[1827]: time="2025-09-06T00:05:59.484156887Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed" Sep 6 00:05:59.484644 env[1827]: time="2025-09-06T00:05:59.484575559Z" level=error msg="Failed to pipe stdout of container \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\"" error="reading from a closed fifo" Sep 6 00:05:59.484836 env[1827]: time="2025-09-06T00:05:59.484790733Z" level=error msg="Failed to pipe stderr of container \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\"" error="reading from a closed fifo" Sep 6 00:05:59.486853 env[1827]: time="2025-09-06T00:05:59.486790817Z" level=error msg="StartContainer for \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:05:59.487309 kubelet[2856]: E0906 00:05:59.487247 2856 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be" Sep 6 00:05:59.488000 kubelet[2856]: E0906 00:05:59.487447 2856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:05:59.488000 kubelet[2856]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:05:59.488000 kubelet[2856]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:05:59.488000 kubelet[2856]: rm /hostbin/cilium-mount Sep 6 00:05:59.488000 kubelet[2856]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s529m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:05:59.488000 kubelet[2856]: > logger="UnhandledError" Sep 6 00:05:59.498846 kubelet[2856]: E0906 00:05:59.488792 2856 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:05:59.757453 kubelet[2856]: I0906 00:05:59.756170 2856 scope.go:117] "RemoveContainer" containerID="bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1" Sep 6 00:05:59.757453 kubelet[2856]: I0906 00:05:59.756995 2856 scope.go:117] "RemoveContainer" containerID="bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1" Sep 6 00:05:59.760995 env[1827]: time="2025-09-06T00:05:59.760928925Z" level=info msg="RemoveContainer for \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\"" Sep 6 00:05:59.761312 env[1827]: time="2025-09-06T00:05:59.761271180Z" level=info msg="RemoveContainer for \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\"" Sep 6 00:05:59.761729 env[1827]: time="2025-09-06T00:05:59.761673880Z" level=error msg="RemoveContainer for \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\" failed" error="failed to set removing state for container \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\": container is already in removing state" Sep 6 00:05:59.762294 kubelet[2856]: E0906 00:05:59.762121 2856 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\": container is already in removing state" containerID="bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1" Sep 6 00:05:59.762294 kubelet[2856]: I0906 00:05:59.762189 2856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1"} err="rpc error: code = Unknown desc = failed to set removing state for container \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\": container is already in removing state" Sep 6 00:05:59.767693 env[1827]: time="2025-09-06T00:05:59.767633103Z" level=info msg="RemoveContainer for \"bc8d11ab0c0bf7aeb24ae65c3660571d8287876016f9e62c71f9731bc25575a1\" returns successfully" Sep 6 00:05:59.769518 kubelet[2856]: E0906 00:05:59.769074 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:06:00.347798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be-rootfs.mount: Deactivated successfully. 
Sep 6 00:06:02.578278 kubelet[2856]: W0906 00:06:02.578214 2856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice/cri-containerd-cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be.scope WatchSource:0}: task cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be not found: not found Sep 6 00:06:06.288875 kubelet[2856]: E0906 00:06:06.288815 2856 kubelet_node_status.go:447] "Node not becoming ready in time after startup" Sep 6 00:06:06.524929 kubelet[2856]: E0906 00:06:06.524884 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:10.332593 kubelet[2856]: E0906 00:06:10.332059 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:06:11.526642 kubelet[2856]: E0906 00:06:11.526591 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:16.529002 kubelet[2856]: E0906 00:06:16.528890 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:21.531083 kubelet[2856]: E0906 00:06:21.531035 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:23.332480 kubelet[2856]: E0906 
00:06:23.332430 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:06:26.532641 kubelet[2856]: E0906 00:06:26.532574 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:31.534010 kubelet[2856]: E0906 00:06:31.533844 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:36.536019 kubelet[2856]: E0906 00:06:36.535948 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:38.331856 kubelet[2856]: E0906 00:06:38.331806 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:06:41.536786 kubelet[2856]: E0906 00:06:41.536723 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:46.538034 kubelet[2856]: E0906 00:06:46.537974 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 
00:06:51.332587 kubelet[2856]: E0906 00:06:51.332460 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:06:51.540145 kubelet[2856]: E0906 00:06:51.540099 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:06:56.541746 kubelet[2856]: E0906 00:06:56.541673 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:01.542987 kubelet[2856]: E0906 00:07:01.542860 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:04.332365 kubelet[2856]: E0906 00:07:04.332295 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:07:06.544244 kubelet[2856]: E0906 00:07:06.544176 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:11.546260 kubelet[2856]: E0906 00:07:11.546209 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" Sep 6 00:07:16.331944 kubelet[2856]: E0906 00:07:16.331893 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:07:16.547898 kubelet[2856]: E0906 00:07:16.547833 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:21.549742 kubelet[2856]: E0906 00:07:21.549678 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:26.550668 kubelet[2856]: E0906 00:07:26.550574 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:29.335787 env[1827]: time="2025-09-06T00:07:29.335726039Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Sep 6 00:07:29.365537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192624986.mount: Deactivated successfully. Sep 6 00:07:29.371189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895404002.mount: Deactivated successfully. 
Sep 6 00:07:29.379578 env[1827]: time="2025-09-06T00:07:29.379502042Z" level=info msg="CreateContainer within sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\"" Sep 6 00:07:29.380708 env[1827]: time="2025-09-06T00:07:29.380660525Z" level=info msg="StartContainer for \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\"" Sep 6 00:07:29.426622 systemd[1]: Started cri-containerd-5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f.scope. Sep 6 00:07:29.455557 systemd[1]: cri-containerd-5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f.scope: Deactivated successfully. Sep 6 00:07:29.470711 env[1827]: time="2025-09-06T00:07:29.470614766Z" level=info msg="shim disconnected" id=5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f Sep 6 00:07:29.470711 env[1827]: time="2025-09-06T00:07:29.470706779Z" level=warning msg="cleaning up after shim disconnected" id=5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f namespace=k8s.io Sep 6 00:07:29.471088 env[1827]: time="2025-09-06T00:07:29.470734870Z" level=info msg="cleaning up dead shim" Sep 6 00:07:29.485102 env[1827]: time="2025-09-06T00:07:29.484994852Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3468 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:07:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:07:29.485611 env[1827]: time="2025-09-06T00:07:29.485511960Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed" Sep 6 00:07:29.488224 env[1827]: 
time="2025-09-06T00:07:29.488103895Z" level=error msg="Failed to pipe stderr of container \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\"" error="reading from a closed fifo" Sep 6 00:07:29.490918 env[1827]: time="2025-09-06T00:07:29.488181664Z" level=error msg="Failed to pipe stdout of container \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\"" error="reading from a closed fifo" Sep 6 00:07:29.493909 env[1827]: time="2025-09-06T00:07:29.493817881Z" level=error msg="StartContainer for \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:07:29.495192 kubelet[2856]: E0906 00:07:29.494466 2856 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f" Sep 6 00:07:29.495192 kubelet[2856]: E0906 00:07:29.494752 2856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:07:29.495192 kubelet[2856]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:07:29.495192 kubelet[2856]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:07:29.495192 kubelet[2856]: rm /hostbin/cilium-mount Sep 6 00:07:29.495192 kubelet[2856]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s529m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:07:29.495192 kubelet[2856]: > logger="UnhandledError" Sep 6 00:07:29.497374 kubelet[2856]: E0906 00:07:29.497300 2856 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:07:29.954504 kubelet[2856]: I0906 00:07:29.954463 2856 scope.go:117] "RemoveContainer" containerID="cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be" Sep 6 00:07:29.955382 kubelet[2856]: I0906 00:07:29.955335 2856 scope.go:117] "RemoveContainer" containerID="cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be" Sep 6 00:07:29.958565 env[1827]: time="2025-09-06T00:07:29.958445818Z" level=info msg="RemoveContainer for \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\"" Sep 6 00:07:29.960039 env[1827]: time="2025-09-06T00:07:29.959527876Z" level=info msg="RemoveContainer for \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\"" Sep 6 00:07:29.962664 env[1827]: time="2025-09-06T00:07:29.961073860Z" level=error msg="RemoveContainer for \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\" failed" error="failed to set removing state for container \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\": container is already in removing state" Sep 6 00:07:29.963135 kubelet[2856]: E0906 00:07:29.963094 2856 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\": container is already in removing state" containerID="cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be" Sep 6 00:07:29.963330 kubelet[2856]: I0906 00:07:29.963289 2856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be"} err="rpc error: code = Unknown desc = failed to set removing state for container \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\": container is already in removing state" Sep 6 00:07:29.970431 env[1827]: time="2025-09-06T00:07:29.968622743Z" level=info msg="RemoveContainer for \"cc300a6d21870f8782a8a7d40e62c7494e9f8aa72778e7b3336ffca5d1e162be\" returns successfully" Sep 6 00:07:29.970894 kubelet[2856]: E0906 00:07:29.970837 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:07:30.352100 systemd[1]: run-containerd-runc-k8s.io-5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f-runc.KMfqrw.mount: Deactivated successfully. Sep 6 00:07:30.352269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f-rootfs.mount: Deactivated successfully. 
Sep 6 00:07:31.551859 kubelet[2856]: E0906 00:07:31.551804 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:32.576492 kubelet[2856]: W0906 00:07:32.576439 2856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice/cri-containerd-5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f.scope WatchSource:0}: task 5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f not found: not found Sep 6 00:07:36.552988 kubelet[2856]: E0906 00:07:36.552937 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:41.332441 kubelet[2856]: E0906 00:07:41.332223 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:07:41.554853 kubelet[2856]: E0906 00:07:41.554806 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:46.556653 kubelet[2856]: E0906 00:07:46.556511 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:51.558528 kubelet[2856]: E0906 00:07:51.558462 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:53.331991 kubelet[2856]: E0906 00:07:53.331917 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:07:56.560083 kubelet[2856]: E0906 00:07:56.560033 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:01.561725 kubelet[2856]: E0906 00:08:01.561661 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:06.563516 kubelet[2856]: E0906 00:08:06.563385 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:07.332733 kubelet[2856]: E0906 00:08:07.332668 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:08:11.564547 kubelet[2856]: E0906 00:08:11.564469 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:16.565504 kubelet[2856]: E0906 00:08:16.565438 2856 kubelet.go:2902] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:21.331930 kubelet[2856]: E0906 00:08:21.331859 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:08:21.567378 kubelet[2856]: E0906 00:08:21.567323 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:26.568214 kubelet[2856]: E0906 00:08:26.568115 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:31.569664 kubelet[2856]: E0906 00:08:31.569588 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:35.332631 kubelet[2856]: E0906 00:08:35.332581 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:08:36.571812 kubelet[2856]: E0906 00:08:36.571745 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:41.574460 kubelet[2856]: E0906 00:08:41.574412 2856 kubelet.go:2902] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:46.576477 kubelet[2856]: E0906 00:08:46.576430 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:50.331906 kubelet[2856]: E0906 00:08:50.331857 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:08:51.578366 kubelet[2856]: E0906 00:08:51.578294 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:56.579650 kubelet[2856]: E0906 00:08:56.579576 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:09:01.331667 kubelet[2856]: E0906 00:09:01.331603 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-bmgmx_kube-system(840711c9-46ec-4cfc-892d-6055289a4794)\"" pod="kube-system/cilium-bmgmx" podUID="840711c9-46ec-4cfc-892d-6055289a4794" Sep 6 00:09:01.581650 kubelet[2856]: E0906 00:09:01.581555 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:09:06.582698 kubelet[2856]: E0906 
00:09:06.582648 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:09:10.290469 env[1827]: time="2025-09-06T00:09:10.290358075Z" level=info msg="StopPodSandbox for \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\"" Sep 6 00:09:10.291218 env[1827]: time="2025-09-06T00:09:10.291149809Z" level=info msg="Container to stop \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:09:10.294589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372-shm.mount: Deactivated successfully. Sep 6 00:09:10.310385 systemd[1]: cri-containerd-a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372.scope: Deactivated successfully. Sep 6 00:09:10.355166 env[1827]: time="2025-09-06T00:09:10.355069649Z" level=info msg="StopContainer for \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\" with timeout 30 (s)" Sep 6 00:09:10.357427 env[1827]: time="2025-09-06T00:09:10.356024535Z" level=info msg="Stop container \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\" with signal terminated" Sep 6 00:09:10.374922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372-rootfs.mount: Deactivated successfully. 
Sep 6 00:09:10.389257 env[1827]: time="2025-09-06T00:09:10.389179875Z" level=info msg="shim disconnected" id=a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372 Sep 6 00:09:10.389257 env[1827]: time="2025-09-06T00:09:10.389253687Z" level=warning msg="cleaning up after shim disconnected" id=a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372 namespace=k8s.io Sep 6 00:09:10.389630 env[1827]: time="2025-09-06T00:09:10.389276475Z" level=info msg="cleaning up dead shim" Sep 6 00:09:10.413171 env[1827]: time="2025-09-06T00:09:10.413085725Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:09:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3522 runtime=io.containerd.runc.v2\n" Sep 6 00:09:10.413767 env[1827]: time="2025-09-06T00:09:10.413704156Z" level=info msg="TearDown network for sandbox \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" successfully" Sep 6 00:09:10.413767 env[1827]: time="2025-09-06T00:09:10.413758720Z" level=info msg="StopPodSandbox for \"a498b3aee020ddcaaa0739a07ecfee98ad7cb2a2c7c6ec494552ff8cb4b05372\" returns successfully" Sep 6 00:09:10.471052 sudo[2058]: pam_unix(sudo:session): session closed for user root Sep 6 00:09:10.494173 kubelet[2856]: I0906 00:09:10.494102 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:09:10.494918 kubelet[2856]: I0906 00:09:10.494009 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-cgroup\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " Sep 6 00:09:10.494918 kubelet[2856]: I0906 00:09:10.494285 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/840711c9-46ec-4cfc-892d-6055289a4794-clustermesh-secrets\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " Sep 6 00:09:10.494918 kubelet[2856]: I0906 00:09:10.494822 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-bpf-maps\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " Sep 6 00:09:10.494918 kubelet[2856]: I0906 00:09:10.494878 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-run\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " Sep 6 00:09:10.494918 kubelet[2856]: I0906 00:09:10.494918 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cni-path\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") " Sep 6 00:09:10.495239 kubelet[2856]: I0906 00:09:10.494963 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-lib-modules\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495239 kubelet[2856]: I0906 00:09:10.495004 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s529m\" (UniqueName: \"kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-kube-api-access-s529m\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495239 kubelet[2856]: I0906 00:09:10.495043 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-net\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495239 kubelet[2856]: I0906 00:09:10.495082 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-hubble-tls\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495239 kubelet[2856]: I0906 00:09:10.495151 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-xtables-lock\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495239 kubelet[2856]: I0906 00:09:10.495188 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-etc-cni-netd\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495239 kubelet[2856]: I0906 00:09:10.495222 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-hostproc\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495693 kubelet[2856]: I0906 00:09:10.495263 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-kernel\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495693 kubelet[2856]: I0906 00:09:10.495302 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/840711c9-46ec-4cfc-892d-6055289a4794-cilium-config-path\") pod \"840711c9-46ec-4cfc-892d-6055289a4794\" (UID: \"840711c9-46ec-4cfc-892d-6055289a4794\") "
Sep 6 00:09:10.495693 kubelet[2856]: I0906 00:09:10.495360 2856 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-cgroup\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.499749 sshd[2055]: pam_unix(sshd:session): session closed for user core
Sep 6 00:09:10.509545 kubelet[2856]: I0906 00:09:10.504349 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.509545 kubelet[2856]: I0906 00:09:10.504454 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cni-path" (OuterVolumeSpecName: "cni-path") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.509545 kubelet[2856]: I0906 00:09:10.504498 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.509545 kubelet[2856]: I0906 00:09:10.505018 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.509545 kubelet[2856]: I0906 00:09:10.505077 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.509545 kubelet[2856]: I0906 00:09:10.505116 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.506355 systemd[1]: session-5.scope: Deactivated successfully.
Sep 6 00:09:10.506772 systemd[1]: session-5.scope: Consumed 14.069s CPU time.
Sep 6 00:09:10.507599 systemd-logind[1812]: Session 5 logged out. Waiting for processes to exit.
Sep 6 00:09:10.507887 systemd[1]: sshd@4-172.31.30.45:22-147.75.109.163:33086.service: Deactivated successfully.
Sep 6 00:09:10.513774 systemd-logind[1812]: Removed session 5.
Sep 6 00:09:10.522968 kubelet[2856]: I0906 00:09:10.519683 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-hostproc" (OuterVolumeSpecName: "hostproc") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.522968 kubelet[2856]: I0906 00:09:10.519762 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.522968 kubelet[2856]: I0906 00:09:10.519802 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:09:10.521722 systemd[1]: var-lib-kubelet-pods-840711c9\x2d46ec\x2d4cfc\x2d892d\x2d6055289a4794-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 6 00:09:10.527800 systemd[1]: cri-containerd-7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af.scope: Deactivated successfully.
Sep 6 00:09:10.536918 kubelet[2856]: I0906 00:09:10.536848 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840711c9-46ec-4cfc-892d-6055289a4794-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:09:10.553752 kubelet[2856]: I0906 00:09:10.549690 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840711c9-46ec-4cfc-892d-6055289a4794-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 6 00:09:10.569282 systemd[1]: var-lib-kubelet-pods-840711c9\x2d46ec\x2d4cfc\x2d892d\x2d6055289a4794-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds529m.mount: Deactivated successfully.
Sep 6 00:09:10.569505 systemd[1]: var-lib-kubelet-pods-840711c9\x2d46ec\x2d4cfc\x2d892d\x2d6055289a4794-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 6 00:09:10.575806 kubelet[2856]: I0906 00:09:10.575727 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-kube-api-access-s529m" (OuterVolumeSpecName: "kube-api-access-s529m") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "kube-api-access-s529m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:09:10.577672 kubelet[2856]: I0906 00:09:10.577594 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "840711c9-46ec-4cfc-892d-6055289a4794" (UID: "840711c9-46ec-4cfc-892d-6055289a4794"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:09:10.596224 kubelet[2856]: I0906 00:09:10.596133 2856 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-net\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596224 kubelet[2856]: I0906 00:09:10.596192 2856 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-lib-modules\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596224 kubelet[2856]: I0906 00:09:10.596217 2856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s529m\" (UniqueName: \"kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-kube-api-access-s529m\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596585 kubelet[2856]: I0906 00:09:10.596239 2856 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/840711c9-46ec-4cfc-892d-6055289a4794-hubble-tls\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596585 kubelet[2856]: I0906 00:09:10.596285 2856 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-xtables-lock\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596585 kubelet[2856]: I0906 00:09:10.596559 2856 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-etc-cni-netd\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596585 kubelet[2856]: I0906 00:09:10.596585 2856 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-hostproc\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596835 kubelet[2856]: I0906 00:09:10.596609 2856 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-host-proc-sys-kernel\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596835 kubelet[2856]: I0906 00:09:10.596635 2856 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/840711c9-46ec-4cfc-892d-6055289a4794-cilium-config-path\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596835 kubelet[2856]: I0906 00:09:10.596657 2856 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/840711c9-46ec-4cfc-892d-6055289a4794-clustermesh-secrets\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596835 kubelet[2856]: I0906 00:09:10.596677 2856 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-bpf-maps\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596835 kubelet[2856]: I0906 00:09:10.596697 2856 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cilium-run\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.596835 kubelet[2856]: I0906 00:09:10.596717 2856 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/840711c9-46ec-4cfc-892d-6055289a4794-cni-path\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.609997 env[1827]: time="2025-09-06T00:09:10.609924187Z" level=info msg="shim disconnected" id=7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af
Sep 6 00:09:10.610263 env[1827]: time="2025-09-06T00:09:10.609998587Z" level=warning msg="cleaning up after shim disconnected" id=7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af namespace=k8s.io
Sep 6 00:09:10.610263 env[1827]: time="2025-09-06T00:09:10.610020979Z" level=info msg="cleaning up dead shim"
Sep 6 00:09:10.623662 env[1827]: time="2025-09-06T00:09:10.623593103Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:09:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3552 runtime=io.containerd.runc.v2\n"
Sep 6 00:09:10.627584 env[1827]: time="2025-09-06T00:09:10.627522210Z" level=info msg="StopContainer for \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\" returns successfully"
Sep 6 00:09:10.628279 env[1827]: time="2025-09-06T00:09:10.628227473Z" level=info msg="StopPodSandbox for \"4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93\""
Sep 6 00:09:10.628482 env[1827]: time="2025-09-06T00:09:10.628317280Z" level=info msg="Container to stop \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:09:10.639980 systemd[1]: cri-containerd-4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93.scope: Deactivated successfully.
Sep 6 00:09:10.684835 env[1827]: time="2025-09-06T00:09:10.684640147Z" level=info msg="shim disconnected" id=4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93
Sep 6 00:09:10.684835 env[1827]: time="2025-09-06T00:09:10.684817854Z" level=warning msg="cleaning up after shim disconnected" id=4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93 namespace=k8s.io
Sep 6 00:09:10.685187 env[1827]: time="2025-09-06T00:09:10.684843294Z" level=info msg="cleaning up dead shim"
Sep 6 00:09:10.700621 env[1827]: time="2025-09-06T00:09:10.700546628Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:09:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3584 runtime=io.containerd.runc.v2\n"
Sep 6 00:09:10.701232 env[1827]: time="2025-09-06T00:09:10.701123755Z" level=info msg="TearDown network for sandbox \"4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93\" successfully"
Sep 6 00:09:10.701445 env[1827]: time="2025-09-06T00:09:10.701381358Z" level=info msg="StopPodSandbox for \"4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93\" returns successfully"
Sep 6 00:09:10.800429 kubelet[2856]: I0906 00:09:10.797547 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dsw5\" (UniqueName: \"kubernetes.io/projected/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-kube-api-access-5dsw5\") pod \"98b9ef45-5a24-41ed-a7fe-c8f78c4523be\" (UID: \"98b9ef45-5a24-41ed-a7fe-c8f78c4523be\") "
Sep 6 00:09:10.800429 kubelet[2856]: I0906 00:09:10.797615 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-cilium-config-path\") pod \"98b9ef45-5a24-41ed-a7fe-c8f78c4523be\" (UID: \"98b9ef45-5a24-41ed-a7fe-c8f78c4523be\") "
Sep 6 00:09:10.803163 kubelet[2856]: I0906 00:09:10.803104 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "98b9ef45-5a24-41ed-a7fe-c8f78c4523be" (UID: "98b9ef45-5a24-41ed-a7fe-c8f78c4523be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:09:10.806449 kubelet[2856]: I0906 00:09:10.804504 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-kube-api-access-5dsw5" (OuterVolumeSpecName: "kube-api-access-5dsw5") pod "98b9ef45-5a24-41ed-a7fe-c8f78c4523be" (UID: "98b9ef45-5a24-41ed-a7fe-c8f78c4523be"). InnerVolumeSpecName "kube-api-access-5dsw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:09:10.898595 kubelet[2856]: I0906 00:09:10.898550 2856 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-cilium-config-path\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:10.898839 kubelet[2856]: I0906 00:09:10.898814 2856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dsw5\" (UniqueName: \"kubernetes.io/projected/98b9ef45-5a24-41ed-a7fe-c8f78c4523be-kube-api-access-5dsw5\") on node \"ip-172-31-30-45\" DevicePath \"\""
Sep 6 00:09:11.183697 kubelet[2856]: I0906 00:09:11.183659 2856 scope.go:117] "RemoveContainer" containerID="7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af"
Sep 6 00:09:11.186942 env[1827]: time="2025-09-06T00:09:11.186878926Z" level=info msg="RemoveContainer for \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\""
Sep 6 00:09:11.192976 env[1827]: time="2025-09-06T00:09:11.192905222Z" level=info msg="RemoveContainer for \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\" returns successfully"
Sep 6 00:09:11.198101 kubelet[2856]: I0906 00:09:11.198066 2856 scope.go:117] "RemoveContainer" containerID="7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af"
Sep 6 00:09:11.199101 systemd[1]: Removed slice kubepods-besteffort-pod98b9ef45_5a24_41ed_a7fe_c8f78c4523be.slice.
Sep 6 00:09:11.203034 env[1827]: time="2025-09-06T00:09:11.201704451Z" level=error msg="ContainerStatus for \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\": not found"
Sep 6 00:09:11.205208 kubelet[2856]: E0906 00:09:11.205127 2856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\": not found" containerID="7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af"
Sep 6 00:09:11.205208 kubelet[2856]: I0906 00:09:11.205194 2856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af"} err="failed to get container status \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\": rpc error: code = NotFound desc = an error occurred when try to find container \"7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af\": not found"
Sep 6 00:09:11.205727 kubelet[2856]: I0906 00:09:11.205237 2856 scope.go:117] "RemoveContainer" containerID="5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f"
Sep 6 00:09:11.210028 systemd[1]: Removed slice kubepods-burstable-pod840711c9_46ec_4cfc_892d_6055289a4794.slice.
Sep 6 00:09:11.216166 env[1827]: time="2025-09-06T00:09:11.216086060Z" level=info msg="RemoveContainer for \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\""
Sep 6 00:09:11.222209 env[1827]: time="2025-09-06T00:09:11.222144384Z" level=info msg="RemoveContainer for \"5335c5f8a4a3c026eb636c5738c3af065376f393da4d687293a65bb89b60c33f\" returns successfully"
Sep 6 00:09:11.294376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bf0ebe639e67849240e736a6c0c88d7373520214deaec174bb136fbe4f720af-rootfs.mount: Deactivated successfully.
Sep 6 00:09:11.294560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93-rootfs.mount: Deactivated successfully.
Sep 6 00:09:11.294706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ba2a82c8e9583bd554cf810f0b2538c6dbe89f3d3f12d7d1210c4e94141cd93-shm.mount: Deactivated successfully.
Sep 6 00:09:11.294852 systemd[1]: var-lib-kubelet-pods-98b9ef45\x2d5a24\x2d41ed\x2da7fe\x2dc8f78c4523be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5dsw5.mount: Deactivated successfully.
Sep 6 00:09:11.584501 kubelet[2856]: E0906 00:09:11.584434 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:09:12.335735 kubelet[2856]: I0906 00:09:12.335689 2856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840711c9-46ec-4cfc-892d-6055289a4794" path="/var/lib/kubelet/pods/840711c9-46ec-4cfc-892d-6055289a4794/volumes"
Sep 6 00:09:12.336965 kubelet[2856]: I0906 00:09:12.336934 2856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b9ef45-5a24-41ed-a7fe-c8f78c4523be" path="/var/lib/kubelet/pods/98b9ef45-5a24-41ed-a7fe-c8f78c4523be/volumes"
Sep 6 00:09:16.585775 kubelet[2856]: E0906 00:09:16.585606 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:09:21.587160 kubelet[2856]: E0906 00:09:21.587114 2856 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:09:25.169974 systemd[1]: cri-containerd-0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374.scope: Deactivated successfully.
Sep 6 00:09:25.170564 systemd[1]: cri-containerd-0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374.scope: Consumed 6.323s CPU time.
Sep 6 00:09:25.208421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374-rootfs.mount: Deactivated successfully.
Sep 6 00:09:25.228782 env[1827]: time="2025-09-06T00:09:25.228720360Z" level=info msg="shim disconnected" id=0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374
Sep 6 00:09:25.229670 env[1827]: time="2025-09-06T00:09:25.229622832Z" level=warning msg="cleaning up after shim disconnected" id=0dbb58dc21467361fbea1bfa0bc4438f0c345c4bbb9102654ce7e6e736cec374 namespace=k8s.io
Sep 6 00:09:25.229804 env[1827]: time="2025-09-06T00:09:25.229776504Z" level=info msg="cleaning up dead shim"
Sep 6 00:09:25.246995 env[1827]: time="2025-09-06T00:09:25.246938842Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:09:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3614 runtime=io.containerd.runc.v2\n"