Sep 6 00:05:41.991563 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 6 00:05:41.991600 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025 Sep 6 00:05:41.991622 kernel: efi: EFI v2.70 by EDK II Sep 6 00:05:41.991651 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98 Sep 6 00:05:41.991667 kernel: ACPI: Early table checksum verification disabled Sep 6 00:05:41.991681 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 6 00:05:41.991697 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 6 00:05:41.991712 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 6 00:05:41.991737 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 6 00:05:41.991754 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 6 00:05:41.991774 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 6 00:05:41.991789 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 6 00:05:41.991803 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 6 00:05:41.991817 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 6 00:05:41.991834 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 6 00:05:41.991853 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 6 00:05:41.991868 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 6 00:05:41.996856 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 6 00:05:41.997005 kernel: printk: bootconsole [uart0] enabled Sep 6 00:05:41.997028 kernel: NUMA: Failed to initialise from firmware Sep 6 00:05:41.997044 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 6 00:05:41.997060 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff] Sep 6 00:05:41.997075 kernel: Zone ranges: Sep 6 00:05:41.997090 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 6 00:05:41.997119 kernel: DMA32 empty Sep 6 00:05:41.997152 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 6 00:05:41.997181 kernel: Movable zone start for each node Sep 6 00:05:41.997196 kernel: Early memory node ranges Sep 6 00:05:41.997211 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 6 00:05:41.997226 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 6 00:05:41.997241 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 6 00:05:41.997256 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 6 00:05:41.997271 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 6 00:05:41.997285 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 6 00:05:41.997300 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 6 00:05:41.997316 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 6 00:05:41.997330 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 6 00:05:41.997345 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Sep 6 00:05:41.997365 kernel: psci: probing for 
conduit method from ACPI. Sep 6 00:05:41.997381 kernel: psci: PSCIv1.0 detected in firmware. Sep 6 00:05:41.997402 kernel: psci: Using standard PSCI v0.2 function IDs Sep 6 00:05:41.997418 kernel: psci: Trusted OS migration not required Sep 6 00:05:41.997434 kernel: psci: SMC Calling Convention v1.1 Sep 6 00:05:41.997454 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Sep 6 00:05:41.997469 kernel: ACPI: SRAT not present Sep 6 00:05:41.997486 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Sep 6 00:05:41.997502 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Sep 6 00:05:41.997519 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 6 00:05:41.997535 kernel: Detected PIPT I-cache on CPU0 Sep 6 00:05:41.997550 kernel: CPU features: detected: GIC system register CPU interface Sep 6 00:05:41.997566 kernel: CPU features: detected: Spectre-v2 Sep 6 00:05:41.997582 kernel: CPU features: detected: Spectre-v3a Sep 6 00:05:41.997597 kernel: CPU features: detected: Spectre-BHB Sep 6 00:05:41.997613 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 6 00:05:41.997633 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 6 00:05:41.997649 kernel: CPU features: detected: ARM erratum 1742098 Sep 6 00:05:41.997665 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 6 00:05:41.997680 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Sep 6 00:05:41.997696 kernel: Policy zone: Normal Sep 6 00:05:41.997714 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 00:05:41.997731 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:05:41.997747 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:05:41.997763 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:05:41.997778 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:05:41.997799 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Sep 6 00:05:41.997817 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved) Sep 6 00:05:41.997833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 00:05:41.997848 kernel: trace event string verifier disabled Sep 6 00:05:41.997864 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 6 00:05:41.997880 kernel: rcu: RCU event tracing is enabled. Sep 6 00:05:41.997960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 00:05:41.997979 kernel: Trampoline variant of Tasks RCU enabled. Sep 6 00:05:41.997995 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:05:41.998011 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
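The kernel command line logged above mixes bare flags (earlycon) with key=value parameters (root=LABEL=ROOT, verity.usrhash=...), and some keys repeat (console= appears twice). A minimal Python sketch for splitting such a line into flags and repeatable key/value pairs; the helper name and the shortened sample string are illustrative, not part of the log:

```python
# Minimal sketch: parse a kernel command line like the one logged above into
# bare flags and (possibly repeated) key=value parameters.
from collections import defaultdict

def parse_cmdline(cmdline: str):
    flags, params = [], defaultdict(list)
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            params[key].append(value)   # keep duplicates, e.g. two console= entries
        else:
            flags.append(token)         # bare flags such as 'earlycon'
    return flags, dict(params)

sample = ("BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT console=tty1 "
          "console=ttyS0,115200n8 earlycon acpi=force net.ifnames=0")
flags, params = parse_cmdline(sample)
print(flags)               # ['earlycon']
print(params["console"])   # ['tty1', 'ttyS0,115200n8']
```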
Sep 6 00:05:41.998027 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 00:05:41.998042 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 6 00:05:41.998065 kernel: GICv3: 96 SPIs implemented Sep 6 00:05:41.998081 kernel: GICv3: 0 Extended SPIs implemented Sep 6 00:05:41.998096 kernel: GICv3: Distributor has no Range Selector support Sep 6 00:05:41.998112 kernel: Root IRQ handler: gic_handle_irq Sep 6 00:05:41.998127 kernel: GICv3: 16 PPIs implemented Sep 6 00:05:41.998143 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 6 00:05:41.998158 kernel: ACPI: SRAT not present Sep 6 00:05:41.998173 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 6 00:05:41.998189 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1) Sep 6 00:05:41.998205 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1) Sep 6 00:05:41.998221 kernel: GICv3: using LPI property table @0x00000004000b0000 Sep 6 00:05:41.998241 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 6 00:05:41.998258 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Sep 6 00:05:41.998273 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 6 00:05:41.998289 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 6 00:05:41.998305 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 6 00:05:41.998321 kernel: Console: colour dummy device 80x25 Sep 6 00:05:41.998337 kernel: printk: console [tty1] enabled Sep 6 00:05:41.998353 kernel: ACPI: Core revision 20210730 Sep 6 00:05:41.998370 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 6 00:05:41.998386 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:05:41.998406 kernel: LSM: Security Framework initializing Sep 6 00:05:41.998423 kernel: SELinux: Initializing. Sep 6 00:05:41.998439 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:05:41.998455 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:05:41.998471 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:05:41.998488 kernel: Platform MSI: ITS@0x10080000 domain created Sep 6 00:05:41.998504 kernel: PCI/MSI: ITS@0x10080000 domain created Sep 6 00:05:41.998520 kernel: Remapping and enabling EFI services. Sep 6 00:05:41.998536 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:05:41.998552 kernel: Detected PIPT I-cache on CPU1 Sep 6 00:05:41.998574 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 6 00:05:41.998590 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Sep 6 00:05:41.998606 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 6 00:05:41.998622 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 00:05:41.998638 kernel: SMP: Total of 2 processors activated. 
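The timer figures above are internally consistent: an 83.33 MHz architected timer gives the 12 ns sched_clock resolution, and with a 1000 Hz tick (inferred from lpj=83333, not stated in the log) the skipped delay-loop calibration yields the printed 166.66 BogoMIPS. A quick arithmetic check, assuming the standard lpj-to-BogoMIPS scaling:

```python
# Cross-check the arch_timer / BogoMIPS figures logged above.
timer_hz = 83_333_333          # "cp15 timer(s) running at 83.33MHz"
HZ = 1000                      # assumed kernel tick rate, inferred from lpj below

resolution_ns = 1e9 / timer_hz
lpj = timer_hz // HZ                       # loops_per_jiffy when calibration is skipped
bogomips = lpj * HZ / 500_000              # standard BogoMIPS scaling of lpj

print(f"{resolution_ns:.0f} ns")           # ~12 ns
print(lpj)                                 # 83333
print(f"{bogomips:.2f} BogoMIPS")          # ~166.67 (logged, truncated, as 166.66)
```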
Sep 6 00:05:41.998654 kernel: CPU features: detected: 32-bit EL0 Support Sep 6 00:05:41.998670 kernel: CPU features: detected: 32-bit EL1 Support Sep 6 00:05:41.998686 kernel: CPU features: detected: CRC32 instructions Sep 6 00:05:41.998702 kernel: CPU: All CPU(s) started at EL1 Sep 6 00:05:41.998724 kernel: alternatives: patching kernel code Sep 6 00:05:41.998741 kernel: devtmpfs: initialized Sep 6 00:05:41.998768 kernel: KASLR disabled due to lack of seed Sep 6 00:05:41.998789 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:05:41.998805 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 00:05:41.998822 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:05:41.998839 kernel: SMBIOS 3.0.0 present. Sep 6 00:05:41.998855 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 6 00:05:41.998872 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:05:41.998936 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 6 00:05:41.998957 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 6 00:05:41.998981 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 6 00:05:41.998997 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:05:41.999014 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1 Sep 6 00:05:41.999031 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:05:41.999048 kernel: cpuidle: using governor menu Sep 6 00:05:41.999069 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 6 00:05:41.999086 kernel: ASID allocator initialised with 32768 entries Sep 6 00:05:41.999103 kernel: ACPI: bus type PCI registered Sep 6 00:05:41.999121 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:05:41.999137 kernel: Serial: AMBA PL011 UART driver Sep 6 00:05:41.999154 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:05:41.999170 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 6 00:05:41.999187 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:05:41.999204 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 6 00:05:41.999226 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:05:41.999243 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 6 00:05:41.999260 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:05:41.999276 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:05:41.999293 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:05:41.999309 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:05:41.999326 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:05:41.999343 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:05:41.999359 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:05:41.999375 kernel: ACPI: Interpreter enabled Sep 6 00:05:41.999396 kernel: ACPI: Using GIC for interrupt routing Sep 6 00:05:41.999413 kernel: ACPI: MCFG table detected, 1 entries Sep 6 00:05:41.999429 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 6 00:05:41.999755 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:05:42.000001 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 6 00:05:42.000195 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER 
PCIeCapability] Sep 6 00:05:42.000386 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 6 00:05:42.000583 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 6 00:05:42.000605 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 6 00:05:42.000622 kernel: acpiphp: Slot [1] registered Sep 6 00:05:42.000639 kernel: acpiphp: Slot [2] registered Sep 6 00:05:42.000656 kernel: acpiphp: Slot [3] registered Sep 6 00:05:42.000672 kernel: acpiphp: Slot [4] registered Sep 6 00:05:42.000689 kernel: acpiphp: Slot [5] registered Sep 6 00:05:42.000705 kernel: acpiphp: Slot [6] registered Sep 6 00:05:42.000721 kernel: acpiphp: Slot [7] registered Sep 6 00:05:42.000743 kernel: acpiphp: Slot [8] registered Sep 6 00:05:42.000760 kernel: acpiphp: Slot [9] registered Sep 6 00:05:42.000776 kernel: acpiphp: Slot [10] registered Sep 6 00:05:42.000793 kernel: acpiphp: Slot [11] registered Sep 6 00:05:42.000809 kernel: acpiphp: Slot [12] registered Sep 6 00:05:42.000825 kernel: acpiphp: Slot [13] registered Sep 6 00:05:42.000842 kernel: acpiphp: Slot [14] registered Sep 6 00:05:42.000858 kernel: acpiphp: Slot [15] registered Sep 6 00:05:42.000874 kernel: acpiphp: Slot [16] registered Sep 6 00:05:42.014848 kernel: acpiphp: Slot [17] registered Sep 6 00:05:42.014903 kernel: acpiphp: Slot [18] registered Sep 6 00:05:42.014924 kernel: acpiphp: Slot [19] registered Sep 6 00:05:42.014941 kernel: acpiphp: Slot [20] registered Sep 6 00:05:42.014957 kernel: acpiphp: Slot [21] registered Sep 6 00:05:42.014975 kernel: acpiphp: Slot [22] registered Sep 6 00:05:42.014991 kernel: acpiphp: Slot [23] registered Sep 6 00:05:42.015008 kernel: acpiphp: Slot [24] registered Sep 6 00:05:42.015024 kernel: acpiphp: Slot [25] registered Sep 6 00:05:42.015041 kernel: acpiphp: Slot [26] registered Sep 6 00:05:42.015067 kernel: acpiphp: Slot [27] registered Sep 6 00:05:42.015083 kernel: acpiphp: Slot [28] registered Sep 6 00:05:42.015099 kernel: acpiphp: Slot [29] registered Sep 6 00:05:42.015116 kernel: acpiphp: Slot [30] registered Sep 6 00:05:42.015132 kernel: acpiphp: Slot [31] registered Sep 6 00:05:42.015149 kernel: PCI host bridge to bus 0000:00 Sep 6 00:05:42.015393 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 6 00:05:42.015573 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 6 00:05:42.015753 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 6 00:05:42.015980 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 6 00:05:42.016217 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Sep 6 00:05:42.016431 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Sep 6 00:05:42.016648 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Sep 6 00:05:42.016877 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 6 00:05:42.017224 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Sep 6 00:05:42.017424 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 6 00:05:42.017634 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 6 00:05:42.017831 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Sep 6 00:05:42.018069 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Sep 6 00:05:42.018265 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Sep 6 00:05:42.018466 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold 
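The ECAM window reserved above, [mem 0x20000000-0x20ffffff] for [bus 00-0f], is sized exactly for PCIe ECAM: 4 KiB of config space per function, 8 functions per device, 32 devices per bus, 16 buses. A short arithmetic check; the constants are the standard ECAM geometry rather than values taken from the log itself:

```python
# Sanity-check the ECAM window logged above: [mem 0x20000000-0x20ffffff], buses 00-0f.
CFG_SPACE_PER_FN = 4096        # bytes of ECAM config space per PCIe function
FNS_PER_DEV = 8
DEVS_PER_BUS = 32
buses = 0x0F - 0x00 + 1        # bus range 00-0f -> 16 buses

ecam_size = buses * DEVS_PER_BUS * FNS_PER_DEV * CFG_SPACE_PER_FN
window = 0x20FFFFFF - 0x20000000 + 1

print(hex(ecam_size), hex(window))   # 0x1000000 0x1000000 -> 16 MiB each
assert ecam_size == window
```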
Sep 6 00:05:42.018670 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Sep 6 00:05:42.018861 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Sep 6 00:05:42.019165 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Sep 6 00:05:42.019362 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Sep 6 00:05:42.019561 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Sep 6 00:05:42.019740 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 6 00:05:42.019942 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 6 00:05:42.020129 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 6 00:05:42.020153 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 6 00:05:42.020170 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 6 00:05:42.020188 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 6 00:05:42.020205 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 6 00:05:42.020222 kernel: iommu: Default domain type: Translated Sep 6 00:05:42.020239 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 6 00:05:42.020255 kernel: vgaarb: loaded Sep 6 00:05:42.020272 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:05:42.020293 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:05:42.020310 kernel: PTP clock support registered Sep 6 00:05:42.020326 kernel: Registered efivars operations Sep 6 00:05:42.020343 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 6 00:05:42.020359 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:05:42.020376 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:05:42.020392 kernel: pnp: PnP ACPI init Sep 6 00:05:42.020587 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 6 00:05:42.020612 kernel: pnp: PnP ACPI: found 1 devices Sep 6 00:05:42.020634 kernel: NET: Registered PF_INET protocol family Sep 6 00:05:42.020651 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 00:05:42.020668 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 00:05:42.020685 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:05:42.020702 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:05:42.020719 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 00:05:42.020736 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 00:05:42.020753 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:05:42.020773 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:05:42.020790 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:05:42.020807 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:05:42.020824 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Sep 6 00:05:42.020840 kernel: kvm [1]: HYP mode not available Sep 6 00:05:42.020856 kernel: Initialise system trusted keyrings Sep 6 00:05:42.020873 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 00:05:42.020919 kernel: Key type asymmetric registered Sep 6 00:05:42.020938 kernel: Asymmetric key parser 'x509' registered Sep 6 00:05:42.020960 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) 
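The hash-table lines above report both an entry count and an allocation order, and the two agree for 8-byte bucket heads; for example, the TCP established table: 32768 entries at 8 bytes each is 262144 bytes, i.e. an order-6 allocation of 4 KiB pages. A small check of that relationship (the 4 KiB page size and 8-byte bucket size are assumptions implied by the logged sizes):

```python
# Cross-check "TCP established hash table entries: 32768 (order: 6, 262144 bytes)".
PAGE_SIZE = 4096               # assumed 4 KiB pages
BUCKET_SIZE = 8                # assumed 8-byte bucket head on a 64-bit kernel

entries, order, reported_bytes = 32768, 6, 262144

assert entries * BUCKET_SIZE == reported_bytes        # 32768 * 8 = 262144
assert (2 ** order) * PAGE_SIZE == reported_bytes     # 64 pages * 4096 = 262144
print(f"order-{order} allocation = {reported_bytes} bytes")
```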
Sep 6 00:05:42.020978 kernel: io scheduler mq-deadline registered Sep 6 00:05:42.020995 kernel: io scheduler kyber registered Sep 6 00:05:42.021011 kernel: io scheduler bfq registered Sep 6 00:05:42.021251 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Sep 6 00:05:42.021280 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 6 00:05:42.021297 kernel: ACPI: button: Power Button [PWRB] Sep 6 00:05:42.021314 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 6 00:05:42.021331 kernel: ACPI: button: Sleep Button [SLPB] Sep 6 00:05:42.021354 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:05:42.021371 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 6 00:05:42.021574 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 6 00:05:42.021597 kernel: printk: console [ttyS0] disabled Sep 6 00:05:42.021615 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 6 00:05:42.021632 kernel: printk: console [ttyS0] enabled Sep 6 00:05:42.021649 kernel: printk: bootconsole [uart0] disabled Sep 6 00:05:42.021665 kernel: thunder_xcv, ver 1.0 Sep 6 00:05:42.021682 kernel: thunder_bgx, ver 1.0 Sep 6 00:05:42.021717 kernel: nicpf, ver 1.0 Sep 6 00:05:42.021735 kernel: nicvf, ver 1.0 Sep 6 00:05:42.025516 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 6 00:05:42.025728 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:05:41 UTC (1757117141) Sep 6 00:05:42.025752 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 00:05:42.025769 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:05:42.025786 kernel: Segment Routing with IPv6 Sep 6 00:05:42.025804 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:05:42.025828 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:05:42.025845 kernel: Key type dns_resolver registered Sep 6 00:05:42.025872 kernel: registered taskstats version 1 Sep 6 00:05:42.025914 kernel: Loading compiled-in X.509 certificates Sep 6 00:05:42.025934 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386' Sep 6 00:05:42.025951 kernel: Key type .fscrypt registered Sep 6 00:05:42.025968 kernel: Key type fscrypt-provisioning registered Sep 6 00:05:42.025984 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:05:42.026001 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:05:42.026023 kernel: ima: No architecture policies found Sep 6 00:05:42.026040 kernel: clk: Disabling unused clocks Sep 6 00:05:42.026056 kernel: Freeing unused kernel memory: 36416K Sep 6 00:05:42.026072 kernel: Run /init as init process Sep 6 00:05:42.026089 kernel: with arguments: Sep 6 00:05:42.026105 kernel: /init Sep 6 00:05:42.026121 kernel: with environment: Sep 6 00:05:42.026137 kernel: HOME=/ Sep 6 00:05:42.026154 kernel: TERM=linux Sep 6 00:05:42.026174 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:05:42.026195 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:05:42.026217 systemd[1]: Detected virtualization amazon. Sep 6 00:05:42.026236 systemd[1]: Detected architecture arm64. Sep 6 00:05:42.026253 systemd[1]: Running in initrd. 
Sep 6 00:05:42.026271 systemd[1]: No hostname configured, using default hostname. Sep 6 00:05:42.026289 systemd[1]: Hostname set to . Sep 6 00:05:42.026311 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:05:42.026329 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:05:42.026347 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:05:42.026365 systemd[1]: Reached target cryptsetup.target. Sep 6 00:05:42.026382 systemd[1]: Reached target paths.target. Sep 6 00:05:42.026400 systemd[1]: Reached target slices.target. Sep 6 00:05:42.026417 systemd[1]: Reached target swap.target. Sep 6 00:05:42.026435 systemd[1]: Reached target timers.target. Sep 6 00:05:42.026458 systemd[1]: Listening on iscsid.socket. Sep 6 00:05:42.026477 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:05:42.026495 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:05:42.026513 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:05:42.026531 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:05:42.026549 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:05:42.026566 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:05:42.026584 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:05:42.026602 systemd[1]: Reached target sockets.target. Sep 6 00:05:42.026624 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:05:42.026642 systemd[1]: Finished network-cleanup.service. Sep 6 00:05:42.026660 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:05:42.026677 systemd[1]: Starting systemd-journald.service... Sep 6 00:05:42.026695 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:05:42.026713 systemd[1]: Starting systemd-resolved.service... Sep 6 00:05:42.026731 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:05:42.026749 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:05:42.026771 kernel: audit: type=1130 audit(1757117141.986:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.026790 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:05:42.026808 kernel: audit: type=1130 audit(1757117142.004:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.026838 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:05:42.026859 kernel: audit: type=1130 audit(1757117142.022:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.026879 systemd-journald[310]: Journal started Sep 6 00:05:42.026994 systemd-journald[310]: Runtime Journal (/run/log/journal/ec2acc825cbd4c158669e93d8aaf5236) is 8.0M, max 75.4M, 67.4M free. Sep 6 00:05:41.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:42.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:41.961764 systemd-modules-load[311]: Inserted module 'overlay' Sep 6 00:05:42.037985 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:05:42.046705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:05:42.052998 systemd[1]: Started systemd-journald.service. Sep 6 00:05:42.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.083429 kernel: audit: type=1130 audit(1757117142.052:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.086052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:05:42.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.105767 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:05:42.121738 kernel: audit: type=1130 audit(1757117142.086:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.121776 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:05:42.121802 kernel: audit: type=1130 audit(1757117142.106:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.109338 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:05:42.121695 systemd-resolved[312]: Positive Trust Anchors: Sep 6 00:05:42.121710 systemd-resolved[312]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:05:42.121770 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:05:42.148371 systemd-modules-load[311]: Inserted module 'br_netfilter' Sep 6 00:05:42.151145 kernel: Bridge firewalling registered Sep 6 00:05:42.167482 dracut-cmdline[328]: dracut-dracut-053 Sep 6 00:05:42.176117 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 00:05:42.192931 kernel: SCSI subsystem initialized Sep 6 00:05:42.225919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:05:42.225986 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:05:42.231930 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:05:42.240808 systemd-modules-load[311]: Inserted module 'dm_multipath' Sep 6 00:05:42.244436 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:05:42.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.259865 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:05:42.261612 kernel: audit: type=1130 audit(1757117142.248:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.279179 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:05:42.290278 kernel: audit: type=1130 audit(1757117142.277:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.342924 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:05:42.363920 kernel: iscsi: registered transport (tcp) Sep 6 00:05:42.392176 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:05:42.392248 kernel: QLogic iSCSI HBA Driver Sep 6 00:05:42.592951 kernel: random: crng init done Sep 6 00:05:42.593314 systemd-resolved[312]: Defaulting to hostname 'linux'. Sep 6 00:05:42.597260 systemd[1]: Started systemd-resolved.service. 
Sep 6 00:05:42.610262 kernel: audit: type=1130 audit(1757117142.597:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.599275 systemd[1]: Reached target nss-lookup.target. Sep 6 00:05:42.627315 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:05:42.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:42.633961 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:05:42.702923 kernel: raid6: neonx8 gen() 6203 MB/s Sep 6 00:05:42.718927 kernel: raid6: neonx8 xor() 4656 MB/s Sep 6 00:05:42.736934 kernel: raid6: neonx4 gen() 6484 MB/s Sep 6 00:05:42.754923 kernel: raid6: neonx4 xor() 4831 MB/s Sep 6 00:05:42.772919 kernel: raid6: neonx2 gen() 5780 MB/s Sep 6 00:05:42.790920 kernel: raid6: neonx2 xor() 4481 MB/s Sep 6 00:05:42.808922 kernel: raid6: neonx1 gen() 4460 MB/s Sep 6 00:05:42.826945 kernel: raid6: neonx1 xor() 3646 MB/s Sep 6 00:05:42.844938 kernel: raid6: int64x8 gen() 3375 MB/s Sep 6 00:05:42.862923 kernel: raid6: int64x8 xor() 2077 MB/s Sep 6 00:05:42.880922 kernel: raid6: int64x4 gen() 3783 MB/s Sep 6 00:05:42.898921 kernel: raid6: int64x4 xor() 2184 MB/s Sep 6 00:05:42.916927 kernel: raid6: int64x2 gen() 3593 MB/s Sep 6 00:05:42.934937 kernel: raid6: int64x2 xor() 1938 MB/s Sep 6 00:05:42.952936 kernel: raid6: int64x1 gen() 2748 MB/s Sep 6 00:05:42.972340 kernel: raid6: int64x1 xor() 1441 MB/s Sep 6 00:05:42.972379 kernel: raid6: using algorithm neonx4 gen() 6484 MB/s Sep 6 00:05:42.972403 kernel: raid6: .... xor() 4831 MB/s, rmw enabled Sep 6 00:05:42.974147 kernel: raid6: using neon recovery algorithm Sep 6 00:05:42.992934 kernel: xor: measuring software checksum speed Sep 6 00:05:42.994921 kernel: 8regs : 8590 MB/sec Sep 6 00:05:42.994966 kernel: 32regs : 10403 MB/sec Sep 6 00:05:42.998493 kernel: arm64_neon : 9162 MB/sec Sep 6 00:05:42.998524 kernel: xor: using function: 32regs (10403 MB/sec) Sep 6 00:05:43.097926 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 6 00:05:43.115776 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:05:43.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:43.120000 audit: BPF prog-id=7 op=LOAD Sep 6 00:05:43.120000 audit: BPF prog-id=8 op=LOAD Sep 6 00:05:43.122206 systemd[1]: Starting systemd-udevd.service... Sep 6 00:05:43.152234 systemd-udevd[510]: Using default interface naming scheme 'v252'. Sep 6 00:05:43.163684 systemd[1]: Started systemd-udevd.service. Sep 6 00:05:43.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:43.168398 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:05:43.200321 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation Sep 6 00:05:43.263480 systemd[1]: Finished dracut-pre-trigger.service. 
Sep 6 00:05:43.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:43.268281 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:05:43.370110 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:05:43.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:43.498932 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 6 00:05:43.498994 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 6 00:05:43.523978 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 6 00:05:43.524866 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 6 00:05:43.524928 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 6 00:05:43.525187 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 6 00:05:43.525435 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:07:0b:10:a7:87 Sep 6 00:05:43.525640 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 6 00:05:43.531278 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:05:43.531325 kernel: GPT:9289727 != 16777215 Sep 6 00:05:43.531348 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:05:43.533447 kernel: GPT:9289727 != 16777215 Sep 6 00:05:43.534736 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:05:43.538231 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:05:43.543154 (udev-worker)[568]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:05:43.612923 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (559) Sep 6 00:05:43.705800 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:05:43.727854 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:05:43.740736 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:05:43.745960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:05:43.760303 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:05:43.781044 systemd[1]: Starting disk-uuid.service... Sep 6 00:05:43.796736 disk-uuid[672]: Primary Header is updated. Sep 6 00:05:43.796736 disk-uuid[672]: Secondary Entries is updated. Sep 6 00:05:43.796736 disk-uuid[672]: Secondary Header is updated. Sep 6 00:05:43.807747 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:05:44.826926 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:05:44.827478 disk-uuid[673]: The operation has completed successfully. Sep 6 00:05:45.007613 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:05:45.008271 systemd[1]: Finished disk-uuid.service. Sep 6 00:05:45.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.024589 systemd[1]: Starting verity-setup.service... 
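The GPT warnings above ("GPT:9289727 != 16777215") mean the backup GPT header is recorded at LBA 9289727 while the device's last LBA is 16777215, which is the usual symptom of a disk image built for a smaller disk being written to a larger volume; the disk-uuid step logged just above then updates the primary and secondary headers and the warnings stop. A short sketch of what the two LBAs correspond to, assuming 512-byte sectors (the size interpretation is an inference from the numbers, not stated in the log):

```python
# Interpret "GPT:9289727 != 16777215" from the log above.
SECTOR = 512                              # assumed logical sector size in bytes

backup_hdr_lba = 9289727                  # where the image placed its backup GPT header
last_lba = 16777215                       # actual last LBA of the NVMe volume

image_bytes = (backup_hdr_lba + 1) * SECTOR
disk_bytes = (last_lba + 1) * SECTOR

print(f"image end ~ {image_bytes / 2**30:.2f} GiB")   # ~4.43 GiB
print(f"disk size = {disk_bytes / 2**30:.2f} GiB")    # 8.00 GiB
# The backup header belongs at the last LBA, hence the mismatch until it is rewritten.
```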
Sep 6 00:05:45.065626 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 00:05:45.160254 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:05:45.165920 systemd[1]: Finished verity-setup.service. Sep 6 00:05:45.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.176523 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:05:45.277249 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:05:45.272545 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:05:45.274485 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:05:45.275758 systemd[1]: Starting ignition-setup.service... Sep 6 00:05:45.278851 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:05:45.322060 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:05:45.322131 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:05:45.324239 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:05:45.334932 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:05:45.353734 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:05:45.368585 systemd[1]: Finished ignition-setup.service. Sep 6 00:05:45.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.372145 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:05:45.448224 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:05:45.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.452000 audit: BPF prog-id=9 op=LOAD Sep 6 00:05:45.454331 systemd[1]: Starting systemd-networkd.service... Sep 6 00:05:45.505379 systemd-networkd[1018]: lo: Link UP Sep 6 00:05:45.505401 systemd-networkd[1018]: lo: Gained carrier Sep 6 00:05:45.507042 systemd-networkd[1018]: Enumeration completed Sep 6 00:05:45.507914 systemd-networkd[1018]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:05:45.509599 systemd[1]: Started systemd-networkd.service. Sep 6 00:05:45.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.520250 systemd[1]: Reached target network.target. Sep 6 00:05:45.522091 systemd-networkd[1018]: eth0: Link UP Sep 6 00:05:45.522099 systemd-networkd[1018]: eth0: Gained carrier Sep 6 00:05:45.523690 systemd[1]: Starting iscsiuio.service... Sep 6 00:05:45.544064 systemd-networkd[1018]: eth0: DHCPv4 address 172.31.27.196/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:05:45.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.544093 systemd[1]: Started iscsiuio.service. Sep 6 00:05:45.553692 systemd[1]: Starting iscsid.service... 
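The DHCPv4 lease above (172.31.27.196/20 with gateway 172.31.16.1) can be sanity-checked with the standard library: the /20 places the address in 172.31.16.0/20, and the gateway is the first usable host of that subnet. A minimal sketch:

```python
# Check the DHCPv4 lease logged above: 172.31.27.196/20, gateway 172.31.16.1.
import ipaddress

iface = ipaddress.ip_interface("172.31.27.196/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                      # 172.31.16.0/20
print(gateway in iface.network)           # True: gateway is in the same subnet
print(iface.network.num_addresses)        # 4096 addresses in a /20
```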
Sep 6 00:05:45.564532 iscsid[1023]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:05:45.564532 iscsid[1023]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 6 00:05:45.564532 iscsid[1023]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:05:45.564532 iscsid[1023]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:05:45.564532 iscsid[1023]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:05:45.586553 iscsid[1023]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:05:45.595060 systemd[1]: Started iscsid.service. Sep 6 00:05:45.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.601521 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:05:45.629074 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:05:45.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.631210 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:05:45.633043 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:05:45.636019 systemd[1]: Reached target remote-fs.target. Sep 6 00:05:45.645703 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:05:45.665781 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:05:45.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.140631 ignition[955]: Ignition 2.14.0 Sep 6 00:05:46.140735 ignition[955]: Stage: fetch-offline Sep 6 00:05:46.141544 ignition[955]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:05:46.141611 ignition[955]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:05:46.163444 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:05:46.164386 ignition[955]: Ignition finished successfully Sep 6 00:05:46.172422 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:05:46.193048 kernel: kauditd_printk_skb: 18 callbacks suppressed Sep 6 00:05:46.194484 kernel: audit: type=1130 audit(1757117146.172:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.176357 systemd[1]: Starting ignition-fetch.service... 
Sep 6 00:05:46.204001 ignition[1042]: Ignition 2.14.0 Sep 6 00:05:46.204029 ignition[1042]: Stage: fetch Sep 6 00:05:46.204318 ignition[1042]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:05:46.204375 ignition[1042]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:05:46.224938 ignition[1042]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:05:46.227646 ignition[1042]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:05:46.237712 ignition[1042]: INFO : PUT result: OK Sep 6 00:05:46.241207 ignition[1042]: DEBUG : parsed url from cmdline: "" Sep 6 00:05:46.243276 ignition[1042]: INFO : no config URL provided Sep 6 00:05:46.243276 ignition[1042]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:05:46.247670 ignition[1042]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 6 00:05:46.247670 ignition[1042]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:05:46.253199 ignition[1042]: INFO : PUT result: OK Sep 6 00:05:46.253199 ignition[1042]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 6 00:05:46.257468 ignition[1042]: INFO : GET result: OK Sep 6 00:05:46.257468 ignition[1042]: DEBUG : parsing config with SHA512: 60a9fcd4268bdeb9bc31862a3941e03e8fb7485d623abc5177e2e93714a3b7c886b0268ee0d8c55d2ecaefe79c3ba9f5b89a8f5353ce69b30766eb064f6a57d6 Sep 6 00:05:46.267472 unknown[1042]: fetched base config from "system" Sep 6 00:05:46.267501 unknown[1042]: fetched base config from "system" Sep 6 00:05:46.267516 unknown[1042]: fetched user config from "aws" Sep 6 00:05:46.271851 ignition[1042]: fetch: fetch complete Sep 6 00:05:46.271865 ignition[1042]: fetch: fetch passed Sep 6 00:05:46.276336 ignition[1042]: Ignition finished successfully Sep 6 00:05:46.279064 systemd[1]: Finished ignition-fetch.service. Sep 6 00:05:46.296095 kernel: audit: type=1130 audit(1757117146.279:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.283711 systemd[1]: Starting ignition-kargs.service... Sep 6 00:05:46.307485 ignition[1048]: Ignition 2.14.0 Sep 6 00:05:46.309246 ignition[1048]: Stage: kargs Sep 6 00:05:46.310838 ignition[1048]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:05:46.313343 ignition[1048]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:05:46.324181 ignition[1048]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:05:46.326726 ignition[1048]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:05:46.329456 ignition[1048]: INFO : PUT result: OK Sep 6 00:05:46.334424 ignition[1048]: kargs: kargs passed Sep 6 00:05:46.334775 ignition[1048]: Ignition finished successfully Sep 6 00:05:46.342779 systemd[1]: Finished ignition-kargs.service. Sep 6 00:05:46.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:05:46.357332 ignition[1054]: Ignition 2.14.0 Sep 6 00:05:46.344436 systemd[1]: Starting ignition-disks.service... Sep 6 00:05:46.373826 kernel: audit: type=1130 audit(1757117146.341:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.357355 ignition[1054]: Stage: disks Sep 6 00:05:46.376865 ignition[1054]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:05:46.357827 ignition[1054]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:05:46.385436 ignition[1054]: INFO : PUT result: OK Sep 6 00:05:46.357935 ignition[1054]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:05:46.388758 systemd[1]: Finished ignition-disks.service. Sep 6 00:05:46.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.372665 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:05:46.403447 kernel: audit: type=1130 audit(1757117146.388:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.390862 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:05:46.387095 ignition[1054]: disks: disks passed Sep 6 00:05:46.401387 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:05:46.387187 ignition[1054]: Ignition finished successfully Sep 6 00:05:46.405249 systemd[1]: Reached target local-fs.target. Sep 6 00:05:46.408523 systemd[1]: Reached target sysinit.target. Sep 6 00:05:46.411386 systemd[1]: Reached target basic.target. Sep 6 00:05:46.415018 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:05:46.467467 systemd-fsck[1062]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 6 00:05:46.473856 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:05:46.486261 kernel: audit: type=1130 audit(1757117146.474:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.478333 systemd[1]: Mounting sysroot.mount... Sep 6 00:05:46.511943 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:05:46.514302 systemd[1]: Mounted sysroot.mount. Sep 6 00:05:46.514607 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:05:46.529235 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:05:46.533540 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:05:46.535256 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:05:46.535312 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:05:46.550151 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:05:46.575777 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
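The Ignition fetch stage above shows the IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET to http://169.254.169.254/2019-10-01/user-data with that token. A minimal Python sketch of the same two-step flow; the header names and TTL are the standard IMDSv2 ones and are not printed in the log itself:

```python
# Reproduce the two-step IMDSv2 flow seen in the Ignition fetch stage above.
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT for a session token (standard IMDSv2 header; TTL value is illustrative).
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=5).read().decode()

# Step 2: GET the user data with the token, the same path Ignition requested.
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=5).read()
print(len(user_data), "bytes of user data")
```

This only resolves from inside an EC2 instance; a 404 on the user-data path simply means no user data was configured.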
Sep 6 00:05:46.580971 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:05:46.607803 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:05:46.611554 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1079) Sep 6 00:05:46.620176 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:05:46.620252 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:05:46.622402 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:05:46.627399 initrd-setup-root[1108]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:05:46.634919 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:05:46.640367 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:05:46.644805 initrd-setup-root[1118]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:05:46.653512 initrd-setup-root[1126]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:05:46.871323 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:05:46.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.880337 systemd[1]: Starting ignition-mount.service... Sep 6 00:05:46.883329 kernel: audit: type=1130 audit(1757117146.874:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.888306 systemd[1]: Starting sysroot-boot.service... Sep 6 00:05:46.903186 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:05:46.905571 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 6 00:05:46.930350 ignition[1145]: INFO : Ignition 2.14.0 Sep 6 00:05:46.932520 ignition[1145]: INFO : Stage: mount Sep 6 00:05:46.934501 ignition[1145]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:05:46.937488 ignition[1145]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:05:46.951161 systemd[1]: Finished sysroot-boot.service. Sep 6 00:05:46.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.960045 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:05:46.960045 ignition[1145]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:05:46.967140 kernel: audit: type=1130 audit(1757117146.953:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.967624 ignition[1145]: INFO : PUT result: OK Sep 6 00:05:46.972634 ignition[1145]: INFO : mount: mount passed Sep 6 00:05:46.974449 ignition[1145]: INFO : Ignition finished successfully Sep 6 00:05:46.978157 systemd[1]: Finished ignition-mount.service. Sep 6 00:05:46.989044 kernel: audit: type=1130 audit(1757117146.978:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:46.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.981409 systemd[1]: Starting ignition-files.service... Sep 6 00:05:47.000254 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:05:47.024245 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1154) Sep 6 00:05:47.029849 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:05:47.029915 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:05:47.029942 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:05:47.039921 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:05:47.044488 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:05:47.063017 ignition[1173]: INFO : Ignition 2.14.0 Sep 6 00:05:47.063017 ignition[1173]: INFO : Stage: files Sep 6 00:05:47.070861 ignition[1173]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:05:47.070861 ignition[1173]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:05:47.086341 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:05:47.088994 ignition[1173]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:05:47.091992 ignition[1173]: INFO : PUT result: OK Sep 6 00:05:47.100862 ignition[1173]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:05:47.108223 ignition[1173]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:05:47.111497 ignition[1173]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:05:47.147509 ignition[1173]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:05:47.153488 ignition[1173]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:05:47.158935 unknown[1173]: wrote ssh authorized keys file for user: core Sep 6 00:05:47.161314 ignition[1173]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:05:47.171286 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:05:47.175183 ignition[1173]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:05:47.186538 ignition[1173]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem543335524" Sep 6 00:05:47.189552 ignition[1173]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem543335524": device or resource busy Sep 6 00:05:47.189552 ignition[1173]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem543335524", trying btrfs: device or resource busy Sep 6 00:05:47.189552 ignition[1173]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem543335524" Sep 6 00:05:47.199869 ignition[1173]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem543335524" Sep 6 00:05:47.203531 ignition[1173]: INFO : op(3): [started] unmounting "/mnt/oem543335524" Sep 6 00:05:47.205973 ignition[1173]: INFO : op(3): [finished] unmounting "/mnt/oem543335524" Sep 6 00:05:47.205973 ignition[1173]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:05:47.205973 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:05:47.215697 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:05:47.219604 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:05:47.223442 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:05:47.227233 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:05:47.227233 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:05:47.227233 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 6 00:05:47.241677 ignition[1173]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:05:47.254858 ignition[1173]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2739305226" Sep 6 00:05:47.254858 ignition[1173]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2739305226": device or resource busy Sep 6 00:05:47.254858 ignition[1173]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2739305226", trying btrfs: device or resource busy Sep 6 00:05:47.254858 ignition[1173]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2739305226" Sep 6 00:05:47.269094 ignition[1173]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2739305226" Sep 6 00:05:47.269094 ignition[1173]: INFO : op(6): [started] unmounting "/mnt/oem2739305226" Sep 6 00:05:47.269094 ignition[1173]: INFO : op(6): [finished] unmounting "/mnt/oem2739305226" Sep 6 00:05:47.269094 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 6 00:05:47.269094 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 6 00:05:47.269094 ignition[1173]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:05:47.273221 systemd-networkd[1018]: eth0: Gained IPv6LL Sep 6 00:05:47.295224 ignition[1173]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537588261" Sep 6 00:05:47.298255 ignition[1173]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537588261": device or resource busy Sep 6 00:05:47.298255 ignition[1173]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3537588261", trying btrfs: device or resource busy Sep 6 00:05:47.312697 ignition[1173]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537588261" Sep 6 00:05:47.312697 ignition[1173]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537588261" Sep 6 00:05:47.312697 ignition[1173]: INFO : op(9): [started] unmounting 
"/mnt/oem3537588261" Sep 6 00:05:47.312697 ignition[1173]: INFO : op(9): [finished] unmounting "/mnt/oem3537588261" Sep 6 00:05:47.312697 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 6 00:05:47.312697 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 00:05:47.312697 ignition[1173]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:05:47.339722 ignition[1173]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1965532057" Sep 6 00:05:47.339722 ignition[1173]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1965532057": device or resource busy Sep 6 00:05:47.339722 ignition[1173]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1965532057", trying btrfs: device or resource busy Sep 6 00:05:47.339722 ignition[1173]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1965532057" Sep 6 00:05:47.339722 ignition[1173]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1965532057" Sep 6 00:05:47.339722 ignition[1173]: INFO : op(c): [started] unmounting "/mnt/oem1965532057" Sep 6 00:05:47.339722 ignition[1173]: INFO : op(c): [finished] unmounting "/mnt/oem1965532057" Sep 6 00:05:47.339722 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 00:05:47.339722 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:05:47.339722 ignition[1173]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 6 00:05:47.895957 ignition[1173]: INFO : GET result: OK Sep 6 00:05:48.509255 ignition[1173]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:05:48.509255 ignition[1173]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(c): [finished] processing unit "amazon-ssm-agent.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(e): [started] processing unit "nvidia.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(e): [finished] processing unit "nvidia.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(10): [started] setting preset to enabled for "amazon-ssm-agent.service" 
Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(10): [finished] setting preset to enabled for "amazon-ssm-agent.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(11): [started] setting preset to enabled for "nvidia.service" Sep 6 00:05:48.517797 ignition[1173]: INFO : files: op(11): [finished] setting preset to enabled for "nvidia.service" Sep 6 00:05:48.559513 ignition[1173]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:05:48.563387 ignition[1173]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:05:48.567045 ignition[1173]: INFO : files: files passed Sep 6 00:05:48.567045 ignition[1173]: INFO : Ignition finished successfully Sep 6 00:05:48.572947 systemd[1]: Finished ignition-files.service. Sep 6 00:05:48.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.585929 kernel: audit: type=1130 audit(1757117148.573:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.593525 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:05:48.595599 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:05:48.603242 systemd[1]: Starting ignition-quench.service... Sep 6 00:05:48.612628 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:05:48.615016 systemd[1]: Finished ignition-quench.service. Sep 6 00:05:48.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.625921 kernel: audit: type=1130 audit(1757117148.616:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.632422 initrd-setup-root-after-ignition[1198]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:05:48.637254 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:05:48.641528 systemd[1]: Reached target ignition-complete.target. Sep 6 00:05:48.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.646441 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:05:48.674710 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:05:48.674966 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:05:48.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:48.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.679410 systemd[1]: Reached target initrd-fs.target. Sep 6 00:05:48.683817 systemd[1]: Reached target initrd.target. Sep 6 00:05:48.687928 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:05:48.690604 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:05:48.716753 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:05:48.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.721681 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:05:48.743613 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:05:48.747291 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:05:48.751197 systemd[1]: Stopped target timers.target. Sep 6 00:05:48.754581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:05:48.755265 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:05:48.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.760139 systemd[1]: Stopped target initrd.target. Sep 6 00:05:48.765742 systemd[1]: Stopped target basic.target. Sep 6 00:05:48.768943 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:05:48.772632 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:05:48.776409 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:05:48.780276 systemd[1]: Stopped target remote-fs.target. Sep 6 00:05:48.783514 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:05:48.786963 systemd[1]: Stopped target sysinit.target. Sep 6 00:05:48.790164 systemd[1]: Stopped target local-fs.target. Sep 6 00:05:48.793488 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:05:48.796846 systemd[1]: Stopped target swap.target. Sep 6 00:05:48.801606 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:05:48.803767 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:05:48.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.807331 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:05:48.810585 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:05:48.812692 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:05:48.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.816305 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:05:48.818831 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:05:48.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.824317 systemd[1]: ignition-files.service: Deactivated successfully. 
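The files stage above repeats one pattern for every OEM-sourced asset: "oem config not found in \"/usr/share/oem\", looking on oem partition", a failed ext4 mount of /dev/disk/by-label/OEM ("device or resource busy"), a successful btrfs retry, the file write, and an unmount. Below is a rough sketch of that try-ext4-then-btrfs fallback, assuming Python 3 plus the util-linux mount/umount binaries; the temporary mount-point naming is only an approximation of the /mnt/oemNNNN directories in the log.

    import subprocess, tempfile

    def mount_oem(device="/dev/disk/by-label/OEM"):
        # Mirror the log: try ext4 first, then fall back to btrfs on failure.
        mnt = tempfile.mkdtemp(prefix="oem", dir="/mnt")
        for fstype in ("ext4", "btrfs"):
            rc = subprocess.run(["mount", "-t", fstype, device, mnt],
                                capture_output=True).returncode
            if rc == 0:
                return mnt
        raise RuntimeError(f"could not mount {device} as ext4 or btrfs")

    # Usage sketch, roughly one mount/write/unmount op(N) triple from the log:
    # mnt = mount_oem()
    # ... copy the asset out of mnt ...
    # subprocess.run(["umount", mnt], check=True)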
Sep 6 00:05:48.826422 systemd[1]: Stopped ignition-files.service. Sep 6 00:05:48.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.831495 systemd[1]: Stopping ignition-mount.service... Sep 6 00:05:48.852498 ignition[1211]: INFO : Ignition 2.14.0 Sep 6 00:05:48.852498 ignition[1211]: INFO : Stage: umount Sep 6 00:05:48.852498 ignition[1211]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:05:48.852498 ignition[1211]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:05:48.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.874640 iscsid[1023]: iscsid shutting down. Sep 6 00:05:48.856635 systemd[1]: Stopping iscsid.service... Sep 6 00:05:48.878162 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:05:48.878162 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:05:48.859698 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:05:48.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.899510 ignition[1211]: INFO : PUT result: OK Sep 6 00:05:48.899510 ignition[1211]: INFO : umount: umount passed Sep 6 00:05:48.899510 ignition[1211]: INFO : Ignition finished successfully Sep 6 00:05:48.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.861784 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:05:48.874422 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:05:48.887529 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:05:48.887846 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:05:48.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.893477 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:05:48.893695 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:05:48.904718 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:05:48.907747 systemd[1]: Stopped iscsid.service. Sep 6 00:05:48.921394 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:05:48.923522 systemd[1]: Stopped ignition-mount.service. Sep 6 00:05:48.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.935070 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:05:48.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:48.935262 systemd[1]: Stopped ignition-disks.service. Sep 6 00:05:48.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.939096 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:05:48.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.939210 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:05:48.947083 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:05:48.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.948249 systemd[1]: Stopped ignition-fetch.service. Sep 6 00:05:48.950541 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:05:48.951665 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:05:48.954630 systemd[1]: Stopped target paths.target. Sep 6 00:05:48.968746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:05:48.971560 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:05:48.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:48.981650 systemd[1]: Stopped target slices.target. Sep 6 00:05:48.983198 systemd[1]: Stopped target sockets.target. Sep 6 00:05:48.984868 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:05:48.984978 systemd[1]: Closed iscsid.socket. Sep 6 00:05:48.986795 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:05:48.986911 systemd[1]: Stopped ignition-setup.service. Sep 6 00:05:48.988820 systemd[1]: Stopping iscsiuio.service... Sep 6 00:05:49.008475 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:05:49.011316 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:05:49.012960 systemd[1]: Stopped iscsiuio.service. Sep 6 00:05:49.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.017018 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:05:49.017205 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:05:49.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:49.020546 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:05:49.020713 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:05:49.023938 systemd[1]: Stopped target network.target. Sep 6 00:05:49.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.025766 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:05:49.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.025842 systemd[1]: Closed iscsiuio.socket. Sep 6 00:05:49.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.027713 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:05:49.028066 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:05:49.032217 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:05:49.034630 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:05:49.037129 systemd-networkd[1018]: eth0: DHCPv6 lease lost Sep 6 00:05:49.071000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:05:49.039447 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:05:49.039646 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:05:49.043295 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:05:49.043368 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:05:49.048197 systemd[1]: Stopping network-cleanup.service... Sep 6 00:05:49.051493 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:05:49.051618 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:05:49.053609 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:05:49.053697 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:05:49.058008 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:05:49.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.100000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:05:49.058107 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:05:49.061612 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:05:49.088538 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:05:49.094408 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:05:49.094610 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:05:49.111360 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:05:49.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.111572 systemd[1]: Stopped network-cleanup.service. Sep 6 00:05:49.120938 systemd[1]: systemd-udevd.service: Deactivated successfully. 
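Every Ignition stage in this boot (kargs, disks, mount, files, umount) logs that it read /usr/lib/ignition/base.d/base.ign and then "parsing config with SHA512: 6629d8e8…". A digest like that can be recomputed with hashlib as sketched below; whether Ignition hashes the file bytes verbatim or a normalized form is not visible in the log, so treat any comparison against the logged value as illustrative.

    import hashlib

    def config_sha512(path="/usr/lib/ignition/base.d/base.ign"):
        # Stream the config through SHA-512 rather than reading it all at once.
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        print(config_sha512())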
Sep 6 00:05:49.121265 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:05:49.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.128086 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:05:49.128302 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:05:49.134091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:05:49.134286 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:05:49.140038 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:05:49.140282 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:05:49.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.145828 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:05:49.146295 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:05:49.151733 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:05:49.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.152409 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:05:49.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.159203 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:05:49.171720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:05:49.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.171845 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:05:49.179476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:05:49.182048 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:05:49.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:49.186407 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:05:49.191435 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:05:49.205475 systemd[1]: Switching root. Sep 6 00:05:49.232015 systemd-journald[310]: Journal stopped Sep 6 00:05:55.403429 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Sep 6 00:05:55.403558 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:05:55.403661 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:05:55.403699 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:05:55.403733 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:05:55.403764 kernel: SELinux: policy capability open_perms=1 Sep 6 00:05:55.403797 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:05:55.403828 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:05:55.403861 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:05:55.404003 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:05:55.404052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:05:55.404086 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:05:55.404123 systemd[1]: Successfully loaded SELinux policy in 132.727ms. Sep 6 00:05:55.408069 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.668ms. Sep 6 00:05:55.408186 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:05:55.408221 systemd[1]: Detected virtualization amazon. Sep 6 00:05:55.408252 systemd[1]: Detected architecture arm64. Sep 6 00:05:55.408285 systemd[1]: Detected first boot. Sep 6 00:05:55.408326 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:05:55.408359 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:05:55.408398 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:05:55.408433 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:05:55.408470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:05:55.408506 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:05:55.408536 kernel: kauditd_printk_skb: 56 callbacks suppressed Sep 6 00:05:55.408567 kernel: audit: type=1334 audit(1757117154.933:88): prog-id=12 op=LOAD Sep 6 00:05:55.408602 kernel: audit: type=1334 audit(1757117154.936:89): prog-id=3 op=UNLOAD Sep 6 00:05:55.408632 kernel: audit: type=1334 audit(1757117154.939:90): prog-id=13 op=LOAD Sep 6 00:05:55.408664 kernel: audit: type=1334 audit(1757117154.941:91): prog-id=14 op=LOAD Sep 6 00:05:55.408695 kernel: audit: type=1334 audit(1757117154.941:92): prog-id=4 op=UNLOAD Sep 6 00:05:55.408729 kernel: audit: type=1334 audit(1757117154.941:93): prog-id=5 op=UNLOAD Sep 6 00:05:55.408770 kernel: audit: type=1334 audit(1757117154.947:94): prog-id=15 op=LOAD Sep 6 00:05:55.408804 kernel: audit: type=1334 audit(1757117154.947:95): prog-id=12 op=UNLOAD Sep 6 00:05:55.408838 kernel: audit: type=1334 audit(1757117154.949:96): prog-id=16 op=LOAD Sep 6 00:05:55.408866 kernel: audit: type=1334 audit(1757117154.952:97): prog-id=17 op=LOAD Sep 6 00:05:55.408922 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:05:55.408957 systemd[1]: Stopped initrd-switch-root.service. 
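The "SELinux: policy capability …=0/1" lines above list which optional policy capabilities the freshly loaded policy enables (network_peer_controls, open_perms, and so on). Once selinuxfs is mounted at /sys/fs/selinux, the same flags are exposed as one file per capability under policy_capabilities; a short sketch that reads them back:

    import os

    SEL_DIR = "/sys/fs/selinux/policy_capabilities"

    def policy_capabilities():
        # Each file contains "0" or "1", matching the kernel messages at policy load.
        caps = {}
        for name in sorted(os.listdir(SEL_DIR)):
            with open(os.path.join(SEL_DIR, name)) as f:
                caps[name] = f.read().strip() == "1"
        return caps

    if __name__ == "__main__":
        for name, enabled in sorted(policy_capabilities().items()):
            print(f"{name}={int(enabled)}")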
Sep 6 00:05:55.408990 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:05:55.409024 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:05:55.409075 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:05:55.409110 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:05:55.409147 systemd[1]: Created slice system-getty.slice. Sep 6 00:05:55.409191 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:05:55.409223 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:05:55.409255 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:05:55.409286 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:05:55.409316 systemd[1]: Created slice user.slice. Sep 6 00:05:55.409346 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:05:55.409379 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:05:55.409413 systemd[1]: Set up automount boot.automount. Sep 6 00:05:55.409449 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:05:55.409483 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:05:55.409516 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:05:55.409546 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:05:55.409579 systemd[1]: Reached target integritysetup.target. Sep 6 00:05:55.409609 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:05:55.409648 systemd[1]: Reached target remote-fs.target. Sep 6 00:05:55.409680 systemd[1]: Reached target slices.target. Sep 6 00:05:55.409713 systemd[1]: Reached target swap.target. Sep 6 00:05:55.409747 systemd[1]: Reached target torcx.target. Sep 6 00:05:55.409777 systemd[1]: Reached target veritysetup.target. Sep 6 00:05:55.409808 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:05:55.409841 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:05:55.409875 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:05:55.409936 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:05:55.409971 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:05:55.410001 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:05:55.410032 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:05:55.410063 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:05:55.410099 systemd[1]: Mounting media.mount... Sep 6 00:05:55.410134 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:05:55.410165 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:05:55.410195 systemd[1]: Mounting tmp.mount... Sep 6 00:05:55.410227 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:05:55.410259 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:05:55.410290 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:05:55.410322 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:05:55.410369 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:05:55.410404 systemd[1]: Starting modprobe@drm.service... Sep 6 00:05:55.410435 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:05:55.410467 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:05:55.410498 systemd[1]: Starting modprobe@loop.service... Sep 6 00:05:55.410531 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:05:55.410562 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
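Slice names such as system-coreos\x2dmetadata\x2dsshkeys.slice above use systemd's unit-name escaping: dashes separate slice hierarchy levels, so a literal "-" inside a component is written as \x2d. The snippet below undoes just that hex escaping, similar in spirit to systemd-escape --unescape (which additionally maps bare dashes back to "/" for path-derived unit names).

    import re

    def unescape_unit(name):
        # Convert \xNN sequences back to the bytes they encode ("\x2d" -> "-").
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"system-coreos\x2dmetadata\x2dsshkeys.slice"))
    # -> system-coreos-metadata-sshkeys.slice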
Sep 6 00:05:55.410595 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:05:55.410625 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:05:55.410659 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:05:55.410692 systemd[1]: Stopped systemd-journald.service. Sep 6 00:05:55.410723 systemd[1]: Starting systemd-journald.service... Sep 6 00:05:55.410753 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:05:55.410783 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:05:55.410813 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:05:55.410844 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:05:55.410875 kernel: fuse: init (API version 7.34) Sep 6 00:05:55.410938 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:05:55.410973 systemd[1]: Stopped verity-setup.service. Sep 6 00:05:55.411009 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:05:55.411041 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:05:55.411079 systemd[1]: Mounted media.mount. Sep 6 00:05:55.411112 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:05:55.411143 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:05:55.411173 systemd[1]: Mounted tmp.mount. Sep 6 00:05:55.411207 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:05:55.411239 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:05:55.411270 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:05:55.411307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:55.411338 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:05:55.411368 kernel: loop: module loaded Sep 6 00:05:55.411398 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:05:55.411428 systemd[1]: Finished modprobe@drm.service. Sep 6 00:05:55.411464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:55.411494 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:05:55.411525 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:05:55.411560 systemd-journald[1327]: Journal started Sep 6 00:05:55.411716 systemd-journald[1327]: Runtime Journal (/run/log/journal/ec2acc825cbd4c158669e93d8aaf5236) is 8.0M, max 75.4M, 67.4M free. Sep 6 00:05:55.411793 systemd[1]: Finished modprobe@fuse.service. 
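systemd-journald reports its Runtime Journal at /run/log/journal/ec2acc825cbd4c158669e93d8aaf5236 as "8.0M, max 75.4M, 67.4M free". journalctl --disk-usage gives the aggregate figure; the sketch below simply sums the files under that directory, using the machine-id path printed in the log.

    import os

    JOURNAL_DIR = "/run/log/journal/ec2acc825cbd4c158669e93d8aaf5236"

    def journal_bytes(path=JOURNAL_DIR):
        # Add up every journal file below the runtime journal directory.
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    print(f"{journal_bytes() / (1024 * 1024):.1f} MiB")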
Sep 6 00:05:50.089000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:05:50.273000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:05:50.273000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:05:50.273000 audit: BPF prog-id=10 op=LOAD Sep 6 00:05:50.273000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:05:50.273000 audit: BPF prog-id=11 op=LOAD Sep 6 00:05:50.273000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:05:50.491000 audit[1245]: AVC avc: denied { associate } for pid=1245 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:05:50.491000 audit[1245]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001058cc a1=4000028e40 a2=4000027100 a3=32 items=0 ppid=1228 pid=1245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:50.491000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:05:50.496000 audit[1245]: AVC avc: denied { associate } for pid=1245 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:05:50.496000 audit[1245]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001059a9 a2=1ed a3=0 items=2 ppid=1228 pid=1245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:50.496000 audit: CWD cwd="/" Sep 6 00:05:50.496000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:05:50.496000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:05:50.496000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:05:54.933000 audit: BPF prog-id=12 op=LOAD Sep 6 00:05:54.936000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:05:54.939000 audit: BPF prog-id=13 op=LOAD Sep 6 00:05:54.941000 audit: BPF prog-id=14 op=LOAD Sep 6 00:05:54.941000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:05:54.941000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:05:54.947000 audit: BPF prog-id=15 op=LOAD Sep 6 00:05:54.947000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:05:54.949000 audit: BPF 
prog-id=16 op=LOAD Sep 6 00:05:54.952000 audit: BPF prog-id=17 op=LOAD Sep 6 00:05:54.952000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:05:54.952000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:05:54.954000 audit: BPF prog-id=18 op=LOAD Sep 6 00:05:54.954000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:05:54.957000 audit: BPF prog-id=19 op=LOAD Sep 6 00:05:54.959000 audit: BPF prog-id=20 op=LOAD Sep 6 00:05:54.959000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:05:54.959000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:05:54.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:54.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:54.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:54.970000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:05:55.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.260000 audit: BPF prog-id=21 op=LOAD Sep 6 00:05:55.261000 audit: BPF prog-id=22 op=LOAD Sep 6 00:05:55.261000 audit: BPF prog-id=23 op=LOAD Sep 6 00:05:55.261000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:05:55.261000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:05:55.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:55.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.398000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:05:55.398000 audit[1327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffffaf93400 a2=4000 a3=1 items=0 ppid=1 pid=1327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:55.398000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:05:55.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:50.487795 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:05:54.932116 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:05:50.488418 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:05:54.932145 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 6 00:05:50.488467 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:05:54.962136 systemd[1]: systemd-journald.service: Deactivated successfully. 
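The audit SYSCALL records for torcx-generator above carry a PROCTITLE field as one long hex string: the process's command line, NUL-separated, hex-encoded so it survives the audit transport. Decoding it recovers the generator invocation; the example feeds in only the first bytes of the value logged above ("2F7573722F6C6962…").

    def decode_proctitle(hexstr):
        # PROCTITLE is the raw argv block: NUL-separated arguments, hex-encoded.
        raw = bytes.fromhex(hexstr)
        return [arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg]

    # Prefix of the PROCTITLE value in the records above:
    print(decode_proctitle("2F7573722F6C69622F73797374656D64"))  # ['/usr/lib/systemd']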
Sep 6 00:05:50.488531 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:05:50.488557 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:05:50.488617 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:05:50.488647 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:05:50.489086 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:05:55.421994 systemd[1]: Started systemd-journald.service. Sep 6 00:05:50.489174 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:05:50.489210 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:05:50.491615 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:05:50.491699 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:05:50.491748 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:05:50.491788 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:05:50.491839 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:05:50.491878 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:05:53.961299 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:05:53.961822 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 
00:05:55.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:53.962090 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:05:55.430755 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:53.962543 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:05:55.431125 systemd[1]: Finished modprobe@loop.service. Sep 6 00:05:53.962652 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:05:55.435705 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:05:53.962789 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2025-09-06T00:05:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:05:55.439128 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:05:55.441919 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:05:55.447313 systemd[1]: Reached target network-pre.target. Sep 6 00:05:55.453936 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:05:55.462430 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:05:55.466071 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:05:55.473365 systemd[1]: Starting systemd-hwdb-update.service... 
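torcx-generator finishes by sealing the applied profile into /run/metadata/torcx as shell-style KEY="VALUE" assignments (TORCX_LOWER_PROFILES, TORCX_UPPER_PROFILE, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR), which other units can then pull in as an environment file. Below is a simplified reader for that format; real env-file quoting rules are more involved than the strip('"') used here.

    def read_torcx_metadata(path="/run/metadata/torcx"):
        # Parse lines such as TORCX_PROFILE_PATH="/run/torcx/profile.json".
        env = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                env[key] = value.strip().strip('"')
        return env

    print(read_torcx_metadata().get("TORCX_PROFILE_PATH"))  # /run/torcx/profile.json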
Sep 6 00:05:55.479300 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:05:55.483171 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:05:55.485733 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:05:55.489596 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:05:55.492940 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:05:55.504585 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:05:55.509268 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:05:55.545383 systemd-journald[1327]: Time spent on flushing to /var/log/journal/ec2acc825cbd4c158669e93d8aaf5236 is 76.269ms for 1121 entries. Sep 6 00:05:55.545383 systemd-journald[1327]: System Journal (/var/log/journal/ec2acc825cbd4c158669e93d8aaf5236) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:05:55.647358 systemd-journald[1327]: Received client request to flush runtime journal. Sep 6 00:05:55.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.553738 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:05:55.558333 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:05:55.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.571010 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:05:55.584789 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:05:55.595400 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:05:55.649675 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:05:55.689248 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:05:55.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:55.696214 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:05:55.716299 udevadm[1363]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:05:55.747668 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:05:55.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:56.450218 systemd[1]: Finished systemd-hwdb-update.service. 
Sep 6 00:05:56.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:56.454000 audit: BPF prog-id=24 op=LOAD Sep 6 00:05:56.454000 audit: BPF prog-id=25 op=LOAD Sep 6 00:05:56.454000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:05:56.454000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:05:56.458067 systemd[1]: Starting systemd-udevd.service... Sep 6 00:05:56.498599 systemd-udevd[1364]: Using default interface naming scheme 'v252'. Sep 6 00:05:56.569070 systemd[1]: Started systemd-udevd.service. Sep 6 00:05:56.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:56.576000 audit: BPF prog-id=26 op=LOAD Sep 6 00:05:56.582103 systemd[1]: Starting systemd-networkd.service... Sep 6 00:05:56.595000 audit: BPF prog-id=27 op=LOAD Sep 6 00:05:56.595000 audit: BPF prog-id=28 op=LOAD Sep 6 00:05:56.595000 audit: BPF prog-id=29 op=LOAD Sep 6 00:05:56.598971 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:05:56.674494 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:05:56.683735 (udev-worker)[1365]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:05:56.693197 systemd[1]: Started systemd-userdbd.service. Sep 6 00:05:56.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:56.941608 systemd-networkd[1367]: lo: Link UP Sep 6 00:05:56.942398 systemd-networkd[1367]: lo: Gained carrier Sep 6 00:05:56.943815 systemd-networkd[1367]: Enumeration completed Sep 6 00:05:56.944246 systemd[1]: Started systemd-networkd.service. Sep 6 00:05:56.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:56.949146 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:05:56.952550 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:05:56.959927 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:05:56.960615 systemd-networkd[1367]: eth0: Link UP Sep 6 00:05:56.961396 systemd-networkd[1367]: eth0: Gained carrier Sep 6 00:05:56.986220 systemd-networkd[1367]: eth0: DHCPv4 address 172.31.27.196/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:05:57.097777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:05:57.101092 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:05:57.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.106463 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:05:57.160151 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:05:57.202712 systemd[1]: Finished lvm2-activation-early.service. 
Sep 6 00:05:57.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.205159 systemd[1]: Reached target cryptsetup.target. Sep 6 00:05:57.209558 systemd[1]: Starting lvm2-activation.service... Sep 6 00:05:57.219079 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:05:57.259270 systemd[1]: Finished lvm2-activation.service. Sep 6 00:05:57.261573 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:05:57.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.264003 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:05:57.264079 systemd[1]: Reached target local-fs.target. Sep 6 00:05:57.266325 systemd[1]: Reached target machines.target. Sep 6 00:05:57.271186 systemd[1]: Starting ldconfig.service... Sep 6 00:05:57.274495 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:05:57.275023 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:05:57.277701 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:05:57.283044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:05:57.291185 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:05:57.296380 systemd[1]: Starting systemd-sysext.service... Sep 6 00:05:57.301942 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1486 (bootctl) Sep 6 00:05:57.305224 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:05:57.342430 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:05:57.356957 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:05:57.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.369254 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:05:57.371401 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:05:57.397977 kernel: loop0: detected capacity change from 0 to 207008 Sep 6 00:05:57.477039 systemd-fsck[1496]: fsck.fat 4.2 (2021-01-31) Sep 6 00:05:57.477039 systemd-fsck[1496]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Sep 6 00:05:57.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.483167 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:05:57.492102 systemd[1]: Mounting boot.mount... Sep 6 00:05:57.515954 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:05:57.525165 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:05:57.526419 systemd[1]: Finished systemd-machine-id-commit.service. 
Sep 6 00:05:57.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.536391 systemd[1]: Mounted boot.mount. Sep 6 00:05:57.552056 kernel: loop1: detected capacity change from 0 to 207008 Sep 6 00:05:57.565631 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:05:57.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.576287 (sd-sysext)[1512]: Using extensions 'kubernetes'. Sep 6 00:05:57.579582 (sd-sysext)[1512]: Merged extensions into '/usr'. Sep 6 00:05:57.620733 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:05:57.626999 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:05:57.631861 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:05:57.636145 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:05:57.640871 systemd[1]: Starting modprobe@loop.service... Sep 6 00:05:57.643124 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:05:57.643457 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:05:57.650155 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:05:57.652988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:57.653371 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:05:57.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.656552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:57.656880 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:05:57.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.660464 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:57.660832 systemd[1]: Finished modprobe@loop.service. Sep 6 00:05:57.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:57.664253 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:05:57.664515 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:05:57.667702 systemd[1]: Finished systemd-sysext.service. Sep 6 00:05:57.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:57.674224 systemd[1]: Starting ensure-sysext.service... Sep 6 00:05:57.683684 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:05:57.693985 systemd[1]: Reloading. Sep 6 00:05:57.766584 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:05:57.785613 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:05:57.803013 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2025-09-06T00:05:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:05:57.814045 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2025-09-06T00:05:57Z" level=info msg="torcx already run" Sep 6 00:05:57.822684 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:05:58.073802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:05:58.073852 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:05:58.120503 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:05:58.286000 audit: BPF prog-id=30 op=LOAD Sep 6 00:05:58.286000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:05:58.286000 audit: BPF prog-id=31 op=LOAD Sep 6 00:05:58.287000 audit: BPF prog-id=32 op=LOAD Sep 6 00:05:58.287000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:05:58.287000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:05:58.294000 audit: BPF prog-id=33 op=LOAD Sep 6 00:05:58.294000 audit: BPF prog-id=34 op=LOAD Sep 6 00:05:58.294000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:05:58.294000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:05:58.296000 audit: BPF prog-id=35 op=LOAD Sep 6 00:05:58.296000 audit: BPF prog-id=27 op=UNLOAD Sep 6 00:05:58.296000 audit: BPF prog-id=36 op=LOAD Sep 6 00:05:58.296000 audit: BPF prog-id=37 op=LOAD Sep 6 00:05:58.296000 audit: BPF prog-id=28 op=UNLOAD Sep 6 00:05:58.296000 audit: BPF prog-id=29 op=UNLOAD Sep 6 00:05:58.299000 audit: BPF prog-id=38 op=LOAD Sep 6 00:05:58.299000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:05:58.308756 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:05:58.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:58.327863 systemd[1]: Starting audit-rules.service... Sep 6 00:05:58.337205 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:05:58.345078 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:05:58.349000 audit: BPF prog-id=39 op=LOAD Sep 6 00:05:58.354267 systemd[1]: Starting systemd-resolved.service... Sep 6 00:05:58.359000 audit: BPF prog-id=40 op=LOAD Sep 6 00:05:58.364247 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:05:58.371496 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:05:58.386483 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.391993 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:05:58.393000 audit[1602]: SYSTEM_BOOT pid=1602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.401183 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:05:58.410179 systemd[1]: Starting modprobe@loop.service... Sep 6 00:05:58.413408 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.414126 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:05:58.418878 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:05:58.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.428539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:58.428907 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:05:58.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.440255 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:58.440653 systemd[1]: Finished modprobe@loop.service. Sep 6 00:05:58.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.450234 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.453677 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:05:58.460451 systemd[1]: Starting modprobe@loop.service... Sep 6 00:05:58.467609 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 00:05:58.467974 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:05:58.468314 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:05:58.470754 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:05:58.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.475092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:58.475423 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:05:58.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.485204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:58.485558 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:05:58.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.491462 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:58.491808 systemd[1]: Finished modprobe@loop.service. Sep 6 00:05:58.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.504629 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.507755 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:05:58.519760 systemd[1]: Starting modprobe@drm.service... Sep 6 00:05:58.526331 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:05:58.532905 systemd[1]: Starting modprobe@loop.service... Sep 6 00:05:58.537767 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.538151 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:05:58.538513 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 6 00:05:58.541198 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:05:58.541583 systemd[1]: Finished modprobe@drm.service. Sep 6 00:05:58.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.547015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:05:58.547383 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:05:58.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.555003 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:05:58.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.566961 systemd[1]: Finished ensure-sysext.service. Sep 6 00:05:58.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.571340 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:05:58.571743 systemd[1]: Finished modprobe@loop.service. Sep 6 00:05:58.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.577841 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.581085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:05:58.581451 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:05:58.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:58.586786 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 6 00:05:58.648000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:05:58.648000 audit[1624]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff8584510 a2=420 a3=0 items=0 ppid=1595 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:58.648000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:05:58.653062 augenrules[1624]: No rules Sep 6 00:05:58.654851 systemd[1]: Finished audit-rules.service. Sep 6 00:05:58.666750 ldconfig[1485]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:05:58.682055 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:05:58.687202 systemd[1]: Finished ldconfig.service. Sep 6 00:05:58.695455 systemd[1]: Reached target time-set.target. Sep 6 00:05:58.703781 systemd[1]: Starting systemd-update-done.service... Sep 6 00:05:58.723691 systemd[1]: Finished systemd-update-done.service. Sep 6 00:05:58.733774 systemd-resolved[1599]: Positive Trust Anchors: Sep 6 00:05:58.734352 systemd-resolved[1599]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:05:58.734503 systemd-resolved[1599]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:05:58.794857 systemd-resolved[1599]: Defaulting to hostname 'linux'. Sep 6 00:05:58.798132 systemd[1]: Started systemd-resolved.service. Sep 6 00:05:58.800398 systemd[1]: Reached target network.target. Sep 6 00:05:58.802309 systemd[1]: Reached target nss-lookup.target. Sep 6 00:05:58.804195 systemd[1]: Reached target sysinit.target. Sep 6 00:05:58.806239 systemd[1]: Started motdgen.path. Sep 6 00:05:58.808078 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:05:58.811019 systemd[1]: Started logrotate.timer. Sep 6 00:05:58.812945 systemd[1]: Started mdadm.timer. Sep 6 00:05:58.814561 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:05:58.816517 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:05:58.816588 systemd[1]: Reached target paths.target. Sep 6 00:05:58.818351 systemd[1]: Reached target timers.target. Sep 6 00:05:58.820766 systemd[1]: Listening on dbus.socket. Sep 6 00:05:58.824930 systemd[1]: Starting docker.socket... Sep 6 00:05:58.834182 systemd[1]: Listening on sshd.socket. Sep 6 00:05:58.836291 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:05:58.837353 systemd[1]: Listening on docker.socket. Sep 6 00:05:58.839383 systemd[1]: Reached target sockets.target. Sep 6 00:05:58.841267 systemd[1]: Reached target basic.target. 
Sep 6 00:05:58.842986 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.843044 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:05:58.845310 systemd[1]: Starting containerd.service... Sep 6 00:05:58.849288 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 00:05:58.854260 systemd[1]: Starting dbus.service... Sep 6 00:05:58.861129 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:05:58.867145 systemd[1]: Starting extend-filesystems.service... Sep 6 00:05:58.869417 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:05:58.874264 systemd[1]: Starting motdgen.service... Sep 6 00:05:58.882502 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:05:58.891062 systemd[1]: Starting sshd-keygen.service... Sep 6 00:05:58.895729 jq[1636]: false Sep 6 00:05:58.900524 systemd[1]: Starting systemd-logind.service... Sep 6 00:05:58.904204 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:05:58.904375 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:05:58.905793 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:05:58.907671 systemd[1]: Starting update-engine.service... Sep 6 00:05:58.912291 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:05:58.919390 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:05:58.921122 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:05:58.931990 systemd-timesyncd[1600]: Contacted time server 69.30.240.102:123 (0.flatcar.pool.ntp.org). Sep 6 00:05:58.932301 systemd-timesyncd[1600]: Initial clock synchronization to Sat 2025-09-06 00:05:59.119539 UTC. Sep 6 00:05:58.945489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:05:58.946024 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:05:58.959159 jq[1643]: true Sep 6 00:05:58.985124 systemd-networkd[1367]: eth0: Gained IPv6LL Sep 6 00:05:58.990136 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:05:58.992634 systemd[1]: Reached target network-online.target. Sep 6 00:05:58.997263 systemd[1]: Started amazon-ssm-agent.service. Sep 6 00:05:59.004080 systemd[1]: Starting kubelet.service... Sep 6 00:05:59.010875 systemd[1]: Started nvidia.service. Sep 6 00:05:59.054687 jq[1656]: true Sep 6 00:05:59.066167 dbus-daemon[1635]: [system] SELinux support is enabled Sep 6 00:05:59.091101 systemd[1]: Started dbus.service. Sep 6 00:05:59.100604 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:05:59.101104 systemd[1]: Finished motdgen.service. Sep 6 00:05:59.112525 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:05:59.112608 systemd[1]: Reached target system-config.target. Sep 6 00:05:59.115090 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 6 00:05:59.115155 systemd[1]: Reached target user-config.target. Sep 6 00:05:59.152389 dbus-daemon[1635]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1367 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 6 00:05:59.159060 dbus-daemon[1635]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 6 00:05:59.159803 extend-filesystems[1637]: Found loop1 Sep 6 00:05:59.178058 extend-filesystems[1637]: Found nvme0n1 Sep 6 00:05:59.188259 extend-filesystems[1637]: Found nvme0n1p1 Sep 6 00:05:59.197898 extend-filesystems[1637]: Found nvme0n1p2 Sep 6 00:05:59.204065 systemd[1]: Starting systemd-hostnamed.service... Sep 6 00:05:59.206131 extend-filesystems[1637]: Found nvme0n1p3 Sep 6 00:05:59.208716 extend-filesystems[1637]: Found usr Sep 6 00:05:59.217840 extend-filesystems[1637]: Found nvme0n1p4 Sep 6 00:05:59.232108 extend-filesystems[1637]: Found nvme0n1p6 Sep 6 00:05:59.234277 extend-filesystems[1637]: Found nvme0n1p7 Sep 6 00:05:59.242213 extend-filesystems[1637]: Found nvme0n1p9 Sep 6 00:05:59.250074 extend-filesystems[1637]: Checking size of /dev/nvme0n1p9 Sep 6 00:05:59.347335 extend-filesystems[1637]: Resized partition /dev/nvme0n1p9 Sep 6 00:05:59.368431 extend-filesystems[1694]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:05:59.414730 bash[1690]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:05:59.418453 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:05:59.437176 update_engine[1642]: I0906 00:05:59.436515 1642 main.cc:92] Flatcar Update Engine starting Sep 6 00:05:59.445982 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 6 00:05:59.464983 update_engine[1642]: I0906 00:05:59.464887 1642 update_check_scheduler.cc:74] Next update check in 6m36s Sep 6 00:05:59.502396 amazon-ssm-agent[1659]: 2025/09/06 00:05:59 Failed to load instance info from vault. RegistrationKey does not exist. Sep 6 00:05:59.506198 systemd[1]: Started update-engine.service. Sep 6 00:05:59.511676 systemd[1]: Started locksmithd.service. Sep 6 00:05:59.524950 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 6 00:05:59.542393 amazon-ssm-agent[1659]: Initializing new seelog logger Sep 6 00:05:59.542669 amazon-ssm-agent[1659]: New Seelog Logger Creation Complete Sep 6 00:05:59.542811 amazon-ssm-agent[1659]: 2025/09/06 00:05:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 6 00:05:59.542811 amazon-ssm-agent[1659]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 6 00:05:59.543361 amazon-ssm-agent[1659]: 2025/09/06 00:05:59 processing appconfig overrides Sep 6 00:05:59.547216 extend-filesystems[1694]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 6 00:05:59.547216 extend-filesystems[1694]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:05:59.547216 extend-filesystems[1694]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 6 00:05:59.558185 extend-filesystems[1637]: Resized filesystem in /dev/nvme0n1p9 Sep 6 00:05:59.561285 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:05:59.561727 systemd[1]: Finished extend-filesystems.service. 
Sep 6 00:05:59.612326 systemd-logind[1641]: Watching system buttons on /dev/input/event0 (Power Button) Sep 6 00:05:59.615092 systemd-logind[1641]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 6 00:05:59.624116 systemd-logind[1641]: New seat seat0. Sep 6 00:05:59.630996 systemd[1]: Started systemd-logind.service. Sep 6 00:05:59.704167 env[1647]: time="2025-09-06T00:05:59.704067577Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:05:59.716077 systemd[1]: nvidia.service: Deactivated successfully. Sep 6 00:05:59.823683 dbus-daemon[1635]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 00:05:59.824043 systemd[1]: Started systemd-hostnamed.service. Sep 6 00:05:59.827838 dbus-daemon[1635]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1678 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 00:05:59.833938 systemd[1]: Starting polkit.service... Sep 6 00:05:59.906976 polkitd[1725]: Started polkitd version 121 Sep 6 00:05:59.948840 polkitd[1725]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 00:05:59.951505 env[1647]: time="2025-09-06T00:05:59.951417037Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:05:59.951873 env[1647]: time="2025-09-06T00:05:59.951722262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:05:59.953552 polkitd[1725]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 00:05:59.959395 polkitd[1725]: Finished loading, compiling and executing 2 rules Sep 6 00:05:59.960474 dbus-daemon[1635]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 00:05:59.960783 systemd[1]: Started polkit.service. Sep 6 00:05:59.963647 polkitd[1725]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 00:05:59.971790 env[1647]: time="2025-09-06T00:05:59.971654080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:05:59.972007 env[1647]: time="2025-09-06T00:05:59.971793686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:05:59.972581 env[1647]: time="2025-09-06T00:05:59.972470901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:05:59.972581 env[1647]: time="2025-09-06T00:05:59.972569069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:05:59.972789 env[1647]: time="2025-09-06T00:05:59.972637454Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:05:59.972789 env[1647]: time="2025-09-06T00:05:59.972669104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:05:59.973132 env[1647]: time="2025-09-06T00:05:59.973044397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:05:59.974351 env[1647]: time="2025-09-06T00:05:59.974244716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:05:59.974968 env[1647]: time="2025-09-06T00:05:59.974788503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:05:59.974968 env[1647]: time="2025-09-06T00:05:59.974958580Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:05:59.975347 env[1647]: time="2025-09-06T00:05:59.975236922Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:05:59.975506 env[1647]: time="2025-09-06T00:05:59.975317539Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:05:59.989106 env[1647]: time="2025-09-06T00:05:59.989017625Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:05:59.989285 env[1647]: time="2025-09-06T00:05:59.989107982Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:05:59.989285 env[1647]: time="2025-09-06T00:05:59.989149457Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:05:59.989285 env[1647]: time="2025-09-06T00:05:59.989262707Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.989520 env[1647]: time="2025-09-06T00:05:59.989307744Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.989520 env[1647]: time="2025-09-06T00:05:59.989345351Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.989520 env[1647]: time="2025-09-06T00:05:59.989379347Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.990102 env[1647]: time="2025-09-06T00:05:59.990002584Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.990102 env[1647]: time="2025-09-06T00:05:59.990086026Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.990310 env[1647]: time="2025-09-06T00:05:59.990127870Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.990310 env[1647]: time="2025-09-06T00:05:59.990168658Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.990310 env[1647]: time="2025-09-06T00:05:59.990203673Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:05:59.990507 env[1647]: time="2025-09-06T00:05:59.990470273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 6 00:05:59.990825 env[1647]: time="2025-09-06T00:05:59.990754460Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:05:59.991347 env[1647]: time="2025-09-06T00:05:59.991271214Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:05:59.991556 env[1647]: time="2025-09-06T00:05:59.991367847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.991556 env[1647]: time="2025-09-06T00:05:59.991407701Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:05:59.994665 env[1647]: time="2025-09-06T00:05:59.993308804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.994823 env[1647]: time="2025-09-06T00:05:59.994690758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.994823 env[1647]: time="2025-09-06T00:05:59.994748752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.994823 env[1647]: time="2025-09-06T00:05:59.994784775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995019 env[1647]: time="2025-09-06T00:05:59.994820232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995019 env[1647]: time="2025-09-06T00:05:59.994854044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995019 env[1647]: time="2025-09-06T00:05:59.994885448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995019 env[1647]: time="2025-09-06T00:05:59.994956129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995294 env[1647]: time="2025-09-06T00:05:59.994999202Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:05:59.995591 env[1647]: time="2025-09-06T00:05:59.995508046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995591 env[1647]: time="2025-09-06T00:05:59.995584402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995774 env[1647]: time="2025-09-06T00:05:59.995622819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:05:59.995774 env[1647]: time="2025-09-06T00:05:59.995659640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:05:59.995774 env[1647]: time="2025-09-06T00:05:59.995698352Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:05:59.995774 env[1647]: time="2025-09-06T00:05:59.995731869Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 6 00:05:59.996067 env[1647]: time="2025-09-06T00:05:59.995778466Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:05:59.996067 env[1647]: time="2025-09-06T00:05:59.995873526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 00:06:00.002133 env[1647]: time="2025-09-06T00:06:00.001964667Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:06:00.002133 env[1647]: time="2025-09-06T00:06:00.002120834Z" level=info msg="Connect containerd service" Sep 6 00:06:00.004503 env[1647]: time="2025-09-06T00:06:00.004232674Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:06:00.005791 env[1647]: time="2025-09-06T00:06:00.005690843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:06:00.013111 env[1647]: time="2025-09-06T00:06:00.006457928Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:06:00.013111 env[1647]: time="2025-09-06T00:06:00.006601813Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:06:00.006841 systemd[1]: Started containerd.service. 
Sep 6 00:06:00.015966 env[1647]: time="2025-09-06T00:06:00.014776661Z" level=info msg="containerd successfully booted in 0.319763s" Sep 6 00:06:00.024219 systemd-hostnamed[1678]: Hostname set to (transient) Sep 6 00:06:00.024418 systemd-resolved[1599]: System hostname changed to 'ip-172-31-27-196'. Sep 6 00:06:00.025415 env[1647]: time="2025-09-06T00:06:00.025018099Z" level=info msg="Start subscribing containerd event" Sep 6 00:06:00.025415 env[1647]: time="2025-09-06T00:06:00.025134600Z" level=info msg="Start recovering state" Sep 6 00:06:00.025415 env[1647]: time="2025-09-06T00:06:00.025264707Z" level=info msg="Start event monitor" Sep 6 00:06:00.025415 env[1647]: time="2025-09-06T00:06:00.025314991Z" level=info msg="Start snapshots syncer" Sep 6 00:06:00.025415 env[1647]: time="2025-09-06T00:06:00.025341848Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:06:00.025415 env[1647]: time="2025-09-06T00:06:00.025363071Z" level=info msg="Start streaming server" Sep 6 00:06:00.251583 coreos-metadata[1634]: Sep 06 00:06:00.249 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 6 00:06:00.254604 coreos-metadata[1634]: Sep 06 00:06:00.254 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 6 00:06:00.255432 coreos-metadata[1634]: Sep 06 00:06:00.255 INFO Fetch successful Sep 6 00:06:00.256161 coreos-metadata[1634]: Sep 06 00:06:00.255 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 6 00:06:00.256789 coreos-metadata[1634]: Sep 06 00:06:00.256 INFO Fetch successful Sep 6 00:06:00.260343 unknown[1634]: wrote ssh authorized keys file for user: core Sep 6 00:06:00.290523 update-ssh-keys[1778]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:06:00.291764 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 6 00:06:00.330261 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Create new startup processor Sep 6 00:06:00.335141 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [LongRunningPluginsManager] registered plugins: {} Sep 6 00:06:00.339851 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing bookkeeping folders Sep 6 00:06:00.340118 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO removing the completed state files Sep 6 00:06:00.340269 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing bookkeeping folders for long running plugins Sep 6 00:06:00.340454 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 6 00:06:00.340611 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing healthcheck folders for long running plugins Sep 6 00:06:00.341040 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing locations for inventory plugin Sep 6 00:06:00.341198 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing default location for custom inventory Sep 6 00:06:00.341365 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing default location for file inventory Sep 6 00:06:00.342398 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Initializing default location for role inventory Sep 6 00:06:00.342591 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Init the cloudwatchlogs publisher Sep 6 00:06:00.342742 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:softwareInventory Sep 6 00:06:00.342994 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 6 00:06:00.343153 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:configureDocker Sep 6 00:06:00.343337 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:runDockerAction Sep 6 00:06:00.343502 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 6 00:06:00.343661 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:refreshAssociation Sep 6 00:06:00.343819 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:configurePackage Sep 6 00:06:00.344046 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:downloadContent Sep 6 00:06:00.345089 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform independent plugin aws:runDocument Sep 6 00:06:00.345325 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Successfully loaded platform dependent plugin aws:runShellScript Sep 6 00:06:00.348099 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 6 00:06:00.348328 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO OS: linux, Arch: arm64 Sep 6 00:06:00.357388 amazon-ssm-agent[1659]: datastore file /var/lib/amazon/ssm/i-00c25f3cacc0bc6ed/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Sep 6 
00:06:00.434769 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] Starting document processing engine... Sep 6 00:06:00.531138 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 6 00:06:00.625373 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 6 00:06:00.720124 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] Starting session document processing engine... Sep 6 00:06:00.815017 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 6 00:06:00.831686 locksmithd[1708]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:06:00.909834 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Sep 6 00:06:01.005010 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-00c25f3cacc0bc6ed, requestId: d6a22fd1-d763-4492-869f-f6f8ff45ce49 Sep 6 00:06:01.100599 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:06:01.196284 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [OfflineService] Starting document processing engine... Sep 6 00:06:01.291992 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [OfflineService] [EngineProcessor] Starting Sep 6 00:06:01.388022 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [OfflineService] [EngineProcessor] Initial processing Sep 6 00:06:01.484174 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [OfflineService] Starting message polling Sep 6 00:06:01.580571 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [OfflineService] Starting send replies to MDS Sep 6 00:06:01.677151 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] Starting message polling Sep 6 00:06:01.773814 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 6 00:06:01.870819 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [instanceID=i-00c25f3cacc0bc6ed] Starting association polling Sep 6 00:06:01.968065 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 6 00:06:02.065250 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 6 00:06:02.127352 systemd[1]: Started kubelet.service. Sep 6 00:06:02.162741 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 6 00:06:02.260639 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 6 00:06:02.345284 sshd_keygen[1665]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:06:02.358540 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 6 00:06:02.394476 systemd[1]: Finished sshd-keygen.service. Sep 6 00:06:02.399772 systemd[1]: Starting issuegen.service... Sep 6 00:06:02.411574 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:06:02.411995 systemd[1]: Finished issuegen.service. Sep 6 00:06:02.416794 systemd[1]: Starting systemd-user-sessions.service... 
Sep 6 00:06:02.432660 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:06:02.438265 systemd[1]: Started getty@tty1.service. Sep 6 00:06:02.443768 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:06:02.446274 systemd[1]: Reached target getty.target. Sep 6 00:06:02.448395 systemd[1]: Reached target multi-user.target. Sep 6 00:06:02.453848 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:06:02.457726 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 6 00:06:02.472802 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:06:02.473261 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:06:02.475804 systemd[1]: Startup finished in 1.208s (kernel) + 8.338s (initrd) + 12.535s (userspace) = 22.081s. Sep 6 00:06:02.556137 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 6 00:06:02.654635 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] listening reply. Sep 6 00:06:02.753410 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 6 00:06:02.852460 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [StartupProcessor] Executing startup processor tasks Sep 6 00:06:02.951569 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 6 00:06:03.050953 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 6 00:06:03.150575 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 6 00:06:03.250136 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-00c25f3cacc0bc6ed?role=subscribe&stream=input Sep 6 00:06:03.350166 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-00c25f3cacc0bc6ed?role=subscribe&stream=input Sep 6 00:06:03.410070 kubelet[1844]: E0906 00:06:03.409934 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:06:03.413849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:06:03.414221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:06:03.414672 systemd[1]: kubelet.service: Consumed 1.620s CPU time. Sep 6 00:06:03.450248 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] Starting receiving message from control channel Sep 6 00:06:03.551210 amazon-ssm-agent[1659]: 2025-09-06 00:06:00 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 6 00:06:08.195646 systemd[1]: Created slice system-sshd.slice. Sep 6 00:06:08.198276 systemd[1]: Started sshd@0-172.31.27.196:22-147.75.109.163:32826.service. 
Sep 6 00:06:08.484732 sshd[1865]: Accepted publickey for core from 147.75.109.163 port 32826 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:08.490284 sshd[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:08.513540 systemd[1]: Created slice user-500.slice. Sep 6 00:06:08.516994 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:06:08.528735 systemd-logind[1641]: New session 1 of user core. Sep 6 00:06:08.541067 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:06:08.546157 systemd[1]: Starting user@500.service... Sep 6 00:06:08.555521 (systemd)[1868]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:08.745317 systemd[1868]: Queued start job for default target default.target. Sep 6 00:06:08.746962 systemd[1868]: Reached target paths.target. Sep 6 00:06:08.747206 systemd[1868]: Reached target sockets.target. Sep 6 00:06:08.747382 systemd[1868]: Reached target timers.target. Sep 6 00:06:08.747752 systemd[1868]: Reached target basic.target. Sep 6 00:06:08.748072 systemd[1868]: Reached target default.target. Sep 6 00:06:08.748161 systemd[1]: Started user@500.service. Sep 6 00:06:08.748382 systemd[1868]: Startup finished in 179ms. Sep 6 00:06:08.753071 systemd[1]: Started session-1.scope. Sep 6 00:06:08.912117 systemd[1]: Started sshd@1-172.31.27.196:22-147.75.109.163:32836.service. Sep 6 00:06:09.084273 sshd[1877]: Accepted publickey for core from 147.75.109.163 port 32836 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:09.086847 sshd[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:09.094558 systemd-logind[1641]: New session 2 of user core. Sep 6 00:06:09.096690 systemd[1]: Started session-2.scope. Sep 6 00:06:09.231192 sshd[1877]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:09.236579 systemd-logind[1641]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:06:09.238117 systemd[1]: sshd@1-172.31.27.196:22-147.75.109.163:32836.service: Deactivated successfully. Sep 6 00:06:09.239383 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:06:09.240875 systemd-logind[1641]: Removed session 2. Sep 6 00:06:09.258213 systemd[1]: Started sshd@2-172.31.27.196:22-147.75.109.163:32850.service. Sep 6 00:06:09.425697 sshd[1883]: Accepted publickey for core from 147.75.109.163 port 32850 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:09.428720 sshd[1883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:09.437445 systemd[1]: Started session-3.scope. Sep 6 00:06:09.438734 systemd-logind[1641]: New session 3 of user core. Sep 6 00:06:09.560394 sshd[1883]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:09.566633 systemd[1]: sshd@2-172.31.27.196:22-147.75.109.163:32850.service: Deactivated successfully. Sep 6 00:06:09.568004 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:06:09.569432 systemd-logind[1641]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:06:09.572388 systemd-logind[1641]: Removed session 3. Sep 6 00:06:09.589416 systemd[1]: Started sshd@3-172.31.27.196:22-147.75.109.163:32858.service. 
Sep 6 00:06:09.766942 sshd[1889]: Accepted publickey for core from 147.75.109.163 port 32858 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:09.769650 sshd[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:09.779000 systemd[1]: Started session-4.scope. Sep 6 00:06:09.779785 systemd-logind[1641]: New session 4 of user core. Sep 6 00:06:09.911493 sshd[1889]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:09.917240 systemd[1]: sshd@3-172.31.27.196:22-147.75.109.163:32858.service: Deactivated successfully. Sep 6 00:06:09.918298 systemd-logind[1641]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:06:09.918476 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:06:09.920506 systemd-logind[1641]: Removed session 4. Sep 6 00:06:09.939533 systemd[1]: Started sshd@4-172.31.27.196:22-147.75.109.163:52984.service. Sep 6 00:06:10.107532 sshd[1895]: Accepted publickey for core from 147.75.109.163 port 52984 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:10.110571 sshd[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:10.119036 systemd-logind[1641]: New session 5 of user core. Sep 6 00:06:10.119747 systemd[1]: Started session-5.scope. Sep 6 00:06:10.321557 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:06:10.322136 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:06:10.346743 systemd[1]: Starting coreos-metadata.service... Sep 6 00:06:10.518044 coreos-metadata[1902]: Sep 06 00:06:10.517 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 6 00:06:10.519545 coreos-metadata[1902]: Sep 06 00:06:10.519 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Sep 6 00:06:10.520382 coreos-metadata[1902]: Sep 06 00:06:10.520 INFO Fetch successful Sep 6 00:06:10.520739 coreos-metadata[1902]: Sep 06 00:06:10.520 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Sep 6 00:06:10.521187 coreos-metadata[1902]: Sep 06 00:06:10.520 INFO Fetch successful Sep 6 00:06:10.522204 coreos-metadata[1902]: Sep 06 00:06:10.521 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Sep 6 00:06:10.523008 coreos-metadata[1902]: Sep 06 00:06:10.522 INFO Fetch successful Sep 6 00:06:10.523282 coreos-metadata[1902]: Sep 06 00:06:10.523 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Sep 6 00:06:10.523617 coreos-metadata[1902]: Sep 06 00:06:10.523 INFO Fetch successful Sep 6 00:06:10.523956 coreos-metadata[1902]: Sep 06 00:06:10.523 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Sep 6 00:06:10.524313 coreos-metadata[1902]: Sep 06 00:06:10.524 INFO Fetch successful Sep 6 00:06:10.524577 coreos-metadata[1902]: Sep 06 00:06:10.524 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Sep 6 00:06:10.525034 coreos-metadata[1902]: Sep 06 00:06:10.524 INFO Fetch successful Sep 6 00:06:10.525298 coreos-metadata[1902]: Sep 06 00:06:10.525 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Sep 6 00:06:10.525720 coreos-metadata[1902]: Sep 06 00:06:10.525 INFO Fetch successful Sep 6 00:06:10.526026 coreos-metadata[1902]: Sep 06 00:06:10.525 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Sep 6 
00:06:10.526302 coreos-metadata[1902]: Sep 06 00:06:10.526 INFO Fetch successful Sep 6 00:06:10.540879 systemd[1]: Finished coreos-metadata.service. Sep 6 00:06:11.602884 systemd[1]: Stopped kubelet.service. Sep 6 00:06:11.604731 systemd[1]: kubelet.service: Consumed 1.620s CPU time. Sep 6 00:06:11.610875 systemd[1]: Starting kubelet.service... Sep 6 00:06:11.678048 systemd[1]: Reloading. Sep 6 00:06:11.911007 /usr/lib/systemd/system-generators/torcx-generator[1961]: time="2025-09-06T00:06:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:06:11.915064 /usr/lib/systemd/system-generators/torcx-generator[1961]: time="2025-09-06T00:06:11Z" level=info msg="torcx already run" Sep 6 00:06:12.127517 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:06:12.127567 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:06:12.173621 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:06:12.422491 systemd[1]: Started kubelet.service. Sep 6 00:06:12.428283 systemd[1]: Stopping kubelet.service... Sep 6 00:06:12.429824 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:06:12.430581 systemd[1]: Stopped kubelet.service. Sep 6 00:06:12.435325 systemd[1]: Starting kubelet.service... Sep 6 00:06:12.558031 amazon-ssm-agent[1659]: 2025-09-06 00:06:12 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 6 00:06:12.767591 systemd[1]: Started kubelet.service. Sep 6 00:06:12.864524 kubelet[2018]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:06:12.864524 kubelet[2018]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:06:12.864524 kubelet[2018]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:06:12.865225 kubelet[2018]: I0906 00:06:12.864683 2018 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:06:14.537663 kubelet[2018]: I0906 00:06:14.537602 2018 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 00:06:14.538526 kubelet[2018]: I0906 00:06:14.538467 2018 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:06:14.539065 kubelet[2018]: I0906 00:06:14.539018 2018 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 00:06:14.602996 kubelet[2018]: I0906 00:06:14.602910 2018 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:06:14.617425 kubelet[2018]: E0906 00:06:14.617372 2018 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:06:14.617720 kubelet[2018]: I0906 00:06:14.617696 2018 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:06:14.623471 kubelet[2018]: I0906 00:06:14.623431 2018 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:06:14.624165 kubelet[2018]: I0906 00:06:14.624111 2018 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:06:14.624621 kubelet[2018]: I0906 00:06:14.624333 2018 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.27.196","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:06:14.625087 kubelet[2018]: I0906 00:06:14.625054 2018 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:06:14.625253 kubelet[2018]: I0906 00:06:14.625230 2018 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 
00:06:14.625728 kubelet[2018]: I0906 00:06:14.625696 2018 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:06:14.651615 kubelet[2018]: I0906 00:06:14.651569 2018 kubelet.go:446] "Attempting to sync node with API server" Sep 6 00:06:14.651840 kubelet[2018]: I0906 00:06:14.651812 2018 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:06:14.652037 kubelet[2018]: I0906 00:06:14.652014 2018 kubelet.go:352] "Adding apiserver pod source" Sep 6 00:06:14.652162 kubelet[2018]: I0906 00:06:14.652140 2018 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:06:14.652527 kubelet[2018]: E0906 00:06:14.652415 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:14.658830 kubelet[2018]: E0906 00:06:14.658710 2018 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:14.662185 kubelet[2018]: I0906 00:06:14.662145 2018 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:06:14.663698 kubelet[2018]: I0906 00:06:14.663658 2018 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:06:14.664136 kubelet[2018]: W0906 00:06:14.664108 2018 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:06:14.666027 kubelet[2018]: I0906 00:06:14.665989 2018 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:06:14.666257 kubelet[2018]: I0906 00:06:14.666234 2018 server.go:1287] "Started kubelet" Sep 6 00:06:14.666943 kubelet[2018]: I0906 00:06:14.666851 2018 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:06:14.668812 kubelet[2018]: I0906 00:06:14.668748 2018 server.go:479] "Adding debug handlers to kubelet server" Sep 6 00:06:14.682842 kubelet[2018]: I0906 00:06:14.682734 2018 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:06:14.683559 kubelet[2018]: I0906 00:06:14.683522 2018 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:06:14.687934 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
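[Editor's sketch] The container-manager entry above logs the kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, all with the LessThan operator (percentages are relative to the resource's capacity). The snippet below only illustrates how such thresholds are evaluated; the "stats" numbers are made-up examples, not measurements from this host.

# LessThan-style hard eviction checks using the thresholds logged by the kubelet above.
MI, GI = 1024 * 1024, 1024 ** 3

thresholds = {
    "memory.available":   ("quantity", 100 * MI),   # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

# Hypothetical observed stats: (available, capacity) per signal.
stats = {
    "memory.available":   (80 * MI, 4 * GI),
    "nodefs.available":   (12 * GI, 40 * GI),
    "nodefs.inodesFree":  (900_000, 10_000_000),
    "imagefs.available":  (5 * GI, 40 * GI),
    "imagefs.inodesFree": (800_000, 10_000_000),
}

for signal, (kind, value) in thresholds.items():
    available, capacity = stats[signal]
    limit = value if kind == "quantity" else value * capacity
    if available < limit:  # Operator: LessThan
        print(f"{signal}: {available} < {limit:.0f} -> eviction signal fires")
    else:
        print(f"{signal}: ok")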
Sep 6 00:06:14.688396 kubelet[2018]: I0906 00:06:14.688330 2018 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:06:14.690570 kubelet[2018]: I0906 00:06:14.690487 2018 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:06:14.692638 kubelet[2018]: E0906 00:06:14.692157 2018 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.27.196.186288c2c0b78b10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.27.196,UID:172.31.27.196,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.27.196,},FirstTimestamp:2025-09-06 00:06:14.666201872 +0000 UTC m=+1.889571661,LastTimestamp:2025-09-06 00:06:14.666201872 +0000 UTC m=+1.889571661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.27.196,}" Sep 6 00:06:14.710244 kubelet[2018]: I0906 00:06:14.706387 2018 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:06:14.718640 kubelet[2018]: I0906 00:06:14.706421 2018 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:06:14.718640 kubelet[2018]: E0906 00:06:14.706629 2018 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.196\" not found" Sep 6 00:06:14.718640 kubelet[2018]: I0906 00:06:14.711451 2018 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:06:14.719149 kubelet[2018]: I0906 00:06:14.719041 2018 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:06:14.720359 kubelet[2018]: I0906 00:06:14.720307 2018 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:06:14.724648 kubelet[2018]: E0906 00:06:14.724588 2018 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.27.196\" not found" node="172.31.27.196" Sep 6 00:06:14.728553 kubelet[2018]: E0906 00:06:14.728514 2018 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:06:14.729829 kubelet[2018]: I0906 00:06:14.729782 2018 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:06:14.767509 kubelet[2018]: I0906 00:06:14.767475 2018 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:06:14.767746 kubelet[2018]: I0906 00:06:14.767720 2018 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:06:14.767873 kubelet[2018]: I0906 00:06:14.767853 2018 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:06:14.772646 kubelet[2018]: I0906 00:06:14.772610 2018 policy_none.go:49] "None policy: Start" Sep 6 00:06:14.772836 kubelet[2018]: I0906 00:06:14.772813 2018 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:06:14.775129 kubelet[2018]: I0906 00:06:14.775099 2018 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:06:14.785383 systemd[1]: Created slice kubepods.slice. 
Sep 6 00:06:14.806434 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:06:14.824069 kubelet[2018]: E0906 00:06:14.824030 2018 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.196\" not found" Sep 6 00:06:14.830174 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 00:06:14.843670 kubelet[2018]: I0906 00:06:14.843614 2018 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:06:14.844072 kubelet[2018]: I0906 00:06:14.843926 2018 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:06:14.844072 kubelet[2018]: I0906 00:06:14.843964 2018 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:06:14.847048 kubelet[2018]: I0906 00:06:14.845998 2018 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:06:14.849211 kubelet[2018]: E0906 00:06:14.849158 2018 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:06:14.849392 kubelet[2018]: E0906 00:06:14.849231 2018 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.196\" not found" Sep 6 00:06:14.949959 kubelet[2018]: I0906 00:06:14.949920 2018 kubelet_node_status.go:75] "Attempting to register node" node="172.31.27.196" Sep 6 00:06:14.958273 kubelet[2018]: I0906 00:06:14.958216 2018 kubelet_node_status.go:78] "Successfully registered node" node="172.31.27.196" Sep 6 00:06:14.978550 kubelet[2018]: I0906 00:06:14.978513 2018 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 6 00:06:14.979686 env[1647]: time="2025-09-06T00:06:14.979493708Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:06:14.980656 kubelet[2018]: I0906 00:06:14.980625 2018 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 6 00:06:15.004758 kubelet[2018]: I0906 00:06:15.004694 2018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:06:15.007553 kubelet[2018]: I0906 00:06:15.007503 2018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:06:15.007553 kubelet[2018]: I0906 00:06:15.007552 2018 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 00:06:15.007807 kubelet[2018]: I0906 00:06:15.007585 2018 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:06:15.007807 kubelet[2018]: I0906 00:06:15.007600 2018 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 00:06:15.007807 kubelet[2018]: E0906 00:06:15.007671 2018 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 00:06:15.325392 sudo[1898]: pam_unix(sudo:session): session closed for user root Sep 6 00:06:15.349488 sshd[1895]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:15.354743 systemd-logind[1641]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:06:15.355190 systemd[1]: sshd@4-172.31.27.196:22-147.75.109.163:52984.service: Deactivated successfully. Sep 6 00:06:15.356459 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:06:15.358087 systemd-logind[1641]: Removed session 5. 
Sep 6 00:06:15.542268 kubelet[2018]: I0906 00:06:15.542179 2018 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 6 00:06:15.542838 kubelet[2018]: W0906 00:06:15.542459 2018 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:06:15.543089 kubelet[2018]: W0906 00:06:15.543052 2018 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:06:15.543189 kubelet[2018]: W0906 00:06:15.543124 2018 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:06:15.653116 kubelet[2018]: E0906 00:06:15.652955 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:15.661336 kubelet[2018]: I0906 00:06:15.661273 2018 apiserver.go:52] "Watching apiserver" Sep 6 00:06:15.680286 systemd[1]: Created slice kubepods-besteffort-pod8948c21a_ed4c_4533_9638_45493450d411.slice. Sep 6 00:06:15.707797 systemd[1]: Created slice kubepods-burstable-pode9d915df_2c5a_45af_a3c3_7cbbd759f718.slice. Sep 6 00:06:15.720936 kubelet[2018]: I0906 00:06:15.720826 2018 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:06:15.729091 kubelet[2018]: I0906 00:06:15.729028 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-bpf-maps\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729253 kubelet[2018]: I0906 00:06:15.729098 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cni-path\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729253 kubelet[2018]: I0906 00:06:15.729146 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9d915df-2c5a-45af-a3c3-7cbbd759f718-clustermesh-secrets\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729253 kubelet[2018]: I0906 00:06:15.729185 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-net\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729253 kubelet[2018]: I0906 00:06:15.729228 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-kernel\") pod \"cilium-zqcc9\" (UID: 
\"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729548 kubelet[2018]: I0906 00:06:15.729264 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8948c21a-ed4c-4533-9638-45493450d411-kube-proxy\") pod \"kube-proxy-tnkh5\" (UID: \"8948c21a-ed4c-4533-9638-45493450d411\") " pod="kube-system/kube-proxy-tnkh5" Sep 6 00:06:15.729548 kubelet[2018]: I0906 00:06:15.729301 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-cgroup\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729548 kubelet[2018]: I0906 00:06:15.729335 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-etc-cni-netd\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729548 kubelet[2018]: I0906 00:06:15.729377 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-xtables-lock\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729548 kubelet[2018]: I0906 00:06:15.729416 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8948c21a-ed4c-4533-9638-45493450d411-lib-modules\") pod \"kube-proxy-tnkh5\" (UID: \"8948c21a-ed4c-4533-9638-45493450d411\") " pod="kube-system/kube-proxy-tnkh5" Sep 6 00:06:15.729548 kubelet[2018]: I0906 00:06:15.729451 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hostproc\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729941 kubelet[2018]: I0906 00:06:15.729487 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hubble-tls\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729941 kubelet[2018]: I0906 00:06:15.729523 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz8nt\" (UniqueName: \"kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-kube-api-access-vz8nt\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729941 kubelet[2018]: I0906 00:06:15.729563 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmpb\" (UniqueName: \"kubernetes.io/projected/8948c21a-ed4c-4533-9638-45493450d411-kube-api-access-6zmpb\") pod \"kube-proxy-tnkh5\" (UID: \"8948c21a-ed4c-4533-9638-45493450d411\") " pod="kube-system/kube-proxy-tnkh5" Sep 6 00:06:15.729941 kubelet[2018]: I0906 00:06:15.729599 2018 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-run\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.729941 kubelet[2018]: I0906 00:06:15.729634 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-lib-modules\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.730248 kubelet[2018]: I0906 00:06:15.729670 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-config-path\") pod \"cilium-zqcc9\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " pod="kube-system/cilium-zqcc9" Sep 6 00:06:15.730248 kubelet[2018]: I0906 00:06:15.729709 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8948c21a-ed4c-4533-9638-45493450d411-xtables-lock\") pod \"kube-proxy-tnkh5\" (UID: \"8948c21a-ed4c-4533-9638-45493450d411\") " pod="kube-system/kube-proxy-tnkh5" Sep 6 00:06:15.831486 kubelet[2018]: I0906 00:06:15.831403 2018 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:06:16.003248 env[1647]: time="2025-09-06T00:06:16.003065703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tnkh5,Uid:8948c21a-ed4c-4533-9638-45493450d411,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:16.020863 env[1647]: time="2025-09-06T00:06:16.020204353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqcc9,Uid:e9d915df-2c5a-45af-a3c3-7cbbd759f718,Namespace:kube-system,Attempt:0,}" Sep 6 00:06:16.653391 kubelet[2018]: E0906 00:06:16.653302 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:16.666972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168234202.mount: Deactivated successfully. 
Sep 6 00:06:16.674990 env[1647]: time="2025-09-06T00:06:16.674867646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.676931 env[1647]: time="2025-09-06T00:06:16.676827440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.685300 env[1647]: time="2025-09-06T00:06:16.685203824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.687691 env[1647]: time="2025-09-06T00:06:16.687583121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.689501 env[1647]: time="2025-09-06T00:06:16.689423202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.695136 env[1647]: time="2025-09-06T00:06:16.695070133Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.697362 env[1647]: time="2025-09-06T00:06:16.697296613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.705353 env[1647]: time="2025-09-06T00:06:16.705250715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:16.758076 env[1647]: time="2025-09-06T00:06:16.757926858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:16.758291 env[1647]: time="2025-09-06T00:06:16.758056808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:16.758291 env[1647]: time="2025-09-06T00:06:16.758154773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:16.758561 env[1647]: time="2025-09-06T00:06:16.758249923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:16.759035 env[1647]: time="2025-09-06T00:06:16.758877542Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4991e9fe7e7ccff3cda75a81de1e778d9df4b94ae9382102602fd1ba48ff04b7 pid=2078 runtime=io.containerd.runc.v2 Sep 6 00:06:16.759430 env[1647]: time="2025-09-06T00:06:16.758748987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:16.759798 env[1647]: time="2025-09-06T00:06:16.759680954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:16.760864 env[1647]: time="2025-09-06T00:06:16.760675256Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b pid=2080 runtime=io.containerd.runc.v2 Sep 6 00:06:16.800428 systemd[1]: Started cri-containerd-49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b.scope. Sep 6 00:06:16.820220 systemd[1]: Started cri-containerd-4991e9fe7e7ccff3cda75a81de1e778d9df4b94ae9382102602fd1ba48ff04b7.scope. Sep 6 00:06:16.902808 env[1647]: time="2025-09-06T00:06:16.900777380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqcc9,Uid:e9d915df-2c5a-45af-a3c3-7cbbd759f718,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\"" Sep 6 00:06:16.905084 env[1647]: time="2025-09-06T00:06:16.904786934Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:06:16.930049 env[1647]: time="2025-09-06T00:06:16.929575830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tnkh5,Uid:8948c21a-ed4c-4533-9638-45493450d411,Namespace:kube-system,Attempt:0,} returns sandbox id \"4991e9fe7e7ccff3cda75a81de1e778d9df4b94ae9382102602fd1ba48ff04b7\"" Sep 6 00:06:17.654264 kubelet[2018]: E0906 00:06:17.654150 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:18.655035 kubelet[2018]: E0906 00:06:18.654960 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:19.656049 kubelet[2018]: E0906 00:06:19.655966 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:20.656560 kubelet[2018]: E0906 00:06:20.656496 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:21.657299 kubelet[2018]: E0906 00:06:21.657200 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:22.657840 kubelet[2018]: E0906 00:06:22.657778 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:23.335017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117571664.mount: Deactivated successfully. 
Sep 6 00:06:23.659286 kubelet[2018]: E0906 00:06:23.658789 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:24.659594 kubelet[2018]: E0906 00:06:24.659500 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:25.660661 kubelet[2018]: E0906 00:06:25.660583 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:26.661081 kubelet[2018]: E0906 00:06:26.660991 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:27.439715 env[1647]: time="2025-09-06T00:06:27.439642605Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:27.444342 env[1647]: time="2025-09-06T00:06:27.444261414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:27.449776 env[1647]: time="2025-09-06T00:06:27.449716743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:27.452939 env[1647]: time="2025-09-06T00:06:27.451532792Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:06:27.455671 env[1647]: time="2025-09-06T00:06:27.455610495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 00:06:27.459200 env[1647]: time="2025-09-06T00:06:27.459141894Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:06:27.485457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619880325.mount: Deactivated successfully. Sep 6 00:06:27.496743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540227353.mount: Deactivated successfully. Sep 6 00:06:27.512321 env[1647]: time="2025-09-06T00:06:27.512258359Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124\"" Sep 6 00:06:27.514051 env[1647]: time="2025-09-06T00:06:27.514001743Z" level=info msg="StartContainer for \"ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124\"" Sep 6 00:06:27.555620 systemd[1]: Started cri-containerd-ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124.scope. Sep 6 00:06:27.658685 systemd[1]: cri-containerd-ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124.scope: Deactivated successfully. 
Sep 6 00:06:27.660278 env[1647]: time="2025-09-06T00:06:27.660107747Z" level=info msg="StartContainer for \"ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124\" returns successfully" Sep 6 00:06:27.661502 kubelet[2018]: E0906 00:06:27.661137 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:28.478736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124-rootfs.mount: Deactivated successfully. Sep 6 00:06:28.661491 kubelet[2018]: E0906 00:06:28.661417 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:29.266166 env[1647]: time="2025-09-06T00:06:29.266086139Z" level=info msg="shim disconnected" id=ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124 Sep 6 00:06:29.266794 env[1647]: time="2025-09-06T00:06:29.266169695Z" level=warning msg="cleaning up after shim disconnected" id=ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124 namespace=k8s.io Sep 6 00:06:29.266794 env[1647]: time="2025-09-06T00:06:29.266193729Z" level=info msg="cleaning up dead shim" Sep 6 00:06:29.284607 env[1647]: time="2025-09-06T00:06:29.284519475Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2204 runtime=io.containerd.runc.v2\n" Sep 6 00:06:29.662337 kubelet[2018]: E0906 00:06:29.662178 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:30.060067 env[1647]: time="2025-09-06T00:06:30.059861041Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:06:30.061410 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 6 00:06:30.096759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1979590499.mount: Deactivated successfully. Sep 6 00:06:30.118397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012212519.mount: Deactivated successfully. Sep 6 00:06:30.140241 env[1647]: time="2025-09-06T00:06:30.140147816Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562\"" Sep 6 00:06:30.141379 env[1647]: time="2025-09-06T00:06:30.141307654Z" level=info msg="StartContainer for \"0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562\"" Sep 6 00:06:30.193423 systemd[1]: Started cri-containerd-0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562.scope. Sep 6 00:06:30.283815 env[1647]: time="2025-09-06T00:06:30.283733168Z" level=info msg="StartContainer for \"0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562\" returns successfully" Sep 6 00:06:30.310383 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:06:30.310864 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:06:30.311183 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:06:30.315599 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:06:30.334137 systemd[1]: cri-containerd-0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562.scope: Deactivated successfully. 
Sep 6 00:06:30.346511 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:06:30.465961 env[1647]: time="2025-09-06T00:06:30.465865088Z" level=info msg="shim disconnected" id=0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562 Sep 6 00:06:30.466346 env[1647]: time="2025-09-06T00:06:30.466302808Z" level=warning msg="cleaning up after shim disconnected" id=0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562 namespace=k8s.io Sep 6 00:06:30.466502 env[1647]: time="2025-09-06T00:06:30.466472551Z" level=info msg="cleaning up dead shim" Sep 6 00:06:30.481730 env[1647]: time="2025-09-06T00:06:30.481674363Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2273 runtime=io.containerd.runc.v2\n" Sep 6 00:06:30.662788 kubelet[2018]: E0906 00:06:30.662605 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:31.063295 env[1647]: time="2025-09-06T00:06:31.063069101Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:06:31.087075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562-rootfs.mount: Deactivated successfully. Sep 6 00:06:31.087266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968147134.mount: Deactivated successfully. Sep 6 00:06:31.091902 env[1647]: time="2025-09-06T00:06:31.091811773Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99\"" Sep 6 00:06:31.093242 env[1647]: time="2025-09-06T00:06:31.093188720Z" level=info msg="StartContainer for \"fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99\"" Sep 6 00:06:31.139129 systemd[1]: Started cri-containerd-fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99.scope. Sep 6 00:06:31.233669 env[1647]: time="2025-09-06T00:06:31.233581830Z" level=info msg="StartContainer for \"fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99\" returns successfully" Sep 6 00:06:31.241176 systemd[1]: cri-containerd-fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99.scope: Deactivated successfully. 
Sep 6 00:06:31.279523 env[1647]: time="2025-09-06T00:06:31.279453759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:31.282005 env[1647]: time="2025-09-06T00:06:31.281936412Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:31.285745 env[1647]: time="2025-09-06T00:06:31.285677185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:31.289620 env[1647]: time="2025-09-06T00:06:31.289564123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:31.290331 env[1647]: time="2025-09-06T00:06:31.290267244Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 6 00:06:31.298870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99-rootfs.mount: Deactivated successfully. Sep 6 00:06:31.305363 env[1647]: time="2025-09-06T00:06:31.305281571Z" level=info msg="CreateContainer within sandbox \"4991e9fe7e7ccff3cda75a81de1e778d9df4b94ae9382102602fd1ba48ff04b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:06:31.663221 kubelet[2018]: E0906 00:06:31.663158 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:31.762780 env[1647]: time="2025-09-06T00:06:31.762698617Z" level=info msg="CreateContainer within sandbox \"4991e9fe7e7ccff3cda75a81de1e778d9df4b94ae9382102602fd1ba48ff04b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4a644b26cbeed9288a9d02b31346830b85fb204b9fa7d748d03f92b62cf15c3\"" Sep 6 00:06:31.764129 env[1647]: time="2025-09-06T00:06:31.764052137Z" level=info msg="StartContainer for \"c4a644b26cbeed9288a9d02b31346830b85fb204b9fa7d748d03f92b62cf15c3\"" Sep 6 00:06:31.767475 env[1647]: time="2025-09-06T00:06:31.767329240Z" level=info msg="shim disconnected" id=fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99 Sep 6 00:06:31.768244 env[1647]: time="2025-09-06T00:06:31.768195825Z" level=warning msg="cleaning up after shim disconnected" id=fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99 namespace=k8s.io Sep 6 00:06:31.768451 env[1647]: time="2025-09-06T00:06:31.768420836Z" level=info msg="cleaning up dead shim" Sep 6 00:06:31.790319 env[1647]: time="2025-09-06T00:06:31.790259024Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2333 runtime=io.containerd.runc.v2\n" Sep 6 00:06:31.806853 systemd[1]: Started cri-containerd-c4a644b26cbeed9288a9d02b31346830b85fb204b9fa7d748d03f92b62cf15c3.scope. 
Sep 6 00:06:31.895057 env[1647]: time="2025-09-06T00:06:31.894986011Z" level=info msg="StartContainer for \"c4a644b26cbeed9288a9d02b31346830b85fb204b9fa7d748d03f92b62cf15c3\" returns successfully" Sep 6 00:06:32.073023 env[1647]: time="2025-09-06T00:06:32.071138838Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:06:32.088813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160217454.mount: Deactivated successfully. Sep 6 00:06:32.107127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014391936.mount: Deactivated successfully. Sep 6 00:06:32.126189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914417716.mount: Deactivated successfully. Sep 6 00:06:32.128615 env[1647]: time="2025-09-06T00:06:32.128529632Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322\"" Sep 6 00:06:32.130237 env[1647]: time="2025-09-06T00:06:32.130149598Z" level=info msg="StartContainer for \"758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322\"" Sep 6 00:06:32.174443 systemd[1]: Started cri-containerd-758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322.scope. Sep 6 00:06:32.266625 systemd[1]: cri-containerd-758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322.scope: Deactivated successfully. Sep 6 00:06:32.270866 env[1647]: time="2025-09-06T00:06:32.270723550Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d915df_2c5a_45af_a3c3_7cbbd759f718.slice/cri-containerd-758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322.scope/memory.events\": no such file or directory" Sep 6 00:06:32.273742 env[1647]: time="2025-09-06T00:06:32.273664409Z" level=info msg="StartContainer for \"758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322\" returns successfully" Sep 6 00:06:32.315757 env[1647]: time="2025-09-06T00:06:32.315693018Z" level=info msg="shim disconnected" id=758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322 Sep 6 00:06:32.316615 env[1647]: time="2025-09-06T00:06:32.316573034Z" level=warning msg="cleaning up after shim disconnected" id=758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322 namespace=k8s.io Sep 6 00:06:32.316769 env[1647]: time="2025-09-06T00:06:32.316736519Z" level=info msg="cleaning up dead shim" Sep 6 00:06:32.337380 env[1647]: time="2025-09-06T00:06:32.336700283Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2466 runtime=io.containerd.runc.v2\n" Sep 6 00:06:32.663442 kubelet[2018]: E0906 00:06:32.663260 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:33.092584 env[1647]: time="2025-09-06T00:06:33.092346505Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:06:33.111713 kubelet[2018]: I0906 00:06:33.111581 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-tnkh5" podStartSLOduration=3.7498708020000002 podStartE2EDuration="18.111557533s" podCreationTimestamp="2025-09-06 00:06:15 +0000 UTC" firstStartedPulling="2025-09-06 00:06:16.932300531 +0000 UTC m=+4.155670295" lastFinishedPulling="2025-09-06 00:06:31.293987262 +0000 UTC m=+18.517357026" observedRunningTime="2025-09-06 00:06:32.109689569 +0000 UTC m=+19.333059345" watchObservedRunningTime="2025-09-06 00:06:33.111557533 +0000 UTC m=+20.334927297" Sep 6 00:06:33.124620 env[1647]: time="2025-09-06T00:06:33.124551159Z" level=info msg="CreateContainer within sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\"" Sep 6 00:06:33.126065 env[1647]: time="2025-09-06T00:06:33.125998476Z" level=info msg="StartContainer for \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\"" Sep 6 00:06:33.166568 systemd[1]: Started cri-containerd-b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553.scope. Sep 6 00:06:33.255312 env[1647]: time="2025-09-06T00:06:33.255243977Z" level=info msg="StartContainer for \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\" returns successfully" Sep 6 00:06:33.414837 kubelet[2018]: I0906 00:06:33.413941 2018 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:06:33.597975 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 00:06:33.664382 kubelet[2018]: E0906 00:06:33.664312 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:34.087913 systemd[1]: run-containerd-runc-k8s.io-b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553-runc.sRhaAL.mount: Deactivated successfully. Sep 6 00:06:34.419961 kernel: Initializing XFRM netlink socket Sep 6 00:06:34.427970 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 00:06:34.652800 kubelet[2018]: E0906 00:06:34.652722 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:34.665235 kubelet[2018]: E0906 00:06:34.665070 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:35.666035 kubelet[2018]: E0906 00:06:35.665951 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:36.248057 systemd-networkd[1367]: cilium_host: Link UP Sep 6 00:06:36.248451 systemd-networkd[1367]: cilium_net: Link UP Sep 6 00:06:36.251771 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:06:36.251977 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:06:36.250266 systemd-networkd[1367]: cilium_net: Gained carrier Sep 6 00:06:36.252712 systemd-networkd[1367]: cilium_host: Gained carrier Sep 6 00:06:36.256649 (udev-worker)[2440]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:06:36.266811 (udev-worker)[2701]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:06:36.393223 systemd-networkd[1367]: cilium_host: Gained IPv6LL Sep 6 00:06:36.449768 (udev-worker)[2712]: Network interface NamePolicy= disabled on kernel command line. 
Sep 6 00:06:36.457980 systemd-networkd[1367]: cilium_vxlan: Link UP Sep 6 00:06:36.457996 systemd-networkd[1367]: cilium_vxlan: Gained carrier Sep 6 00:06:36.666330 kubelet[2018]: E0906 00:06:36.666167 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:37.001211 systemd-networkd[1367]: cilium_net: Gained IPv6LL Sep 6 00:06:37.098941 kernel: NET: Registered PF_ALG protocol family Sep 6 00:06:37.667072 kubelet[2018]: E0906 00:06:37.666967 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:38.153509 systemd-networkd[1367]: cilium_vxlan: Gained IPv6LL Sep 6 00:06:38.402695 kubelet[2018]: I0906 00:06:38.402612 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zqcc9" podStartSLOduration=12.851561873 podStartE2EDuration="23.402589625s" podCreationTimestamp="2025-09-06 00:06:15 +0000 UTC" firstStartedPulling="2025-09-06 00:06:16.903837055 +0000 UTC m=+4.127206819" lastFinishedPulling="2025-09-06 00:06:27.454864795 +0000 UTC m=+14.678234571" observedRunningTime="2025-09-06 00:06:34.127280752 +0000 UTC m=+21.350650564" watchObservedRunningTime="2025-09-06 00:06:38.402589625 +0000 UTC m=+25.625959389" Sep 6 00:06:38.415497 systemd[1]: Created slice kubepods-besteffort-pod02b221f9_4815_4c95_a4d4_a6b7c71b8b2d.slice. Sep 6 00:06:38.496650 kubelet[2018]: I0906 00:06:38.496601 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqwd\" (UniqueName: \"kubernetes.io/projected/02b221f9-4815-4c95-a4d4-a6b7c71b8b2d-kube-api-access-xcqwd\") pod \"nginx-deployment-7fcdb87857-gxwqz\" (UID: \"02b221f9-4815-4c95-a4d4-a6b7c71b8b2d\") " pod="default/nginx-deployment-7fcdb87857-gxwqz" Sep 6 00:06:38.639408 (udev-worker)[2710]: Network interface NamePolicy= disabled on kernel command line. 
Sep 6 00:06:38.641655 systemd-networkd[1367]: lxc_health: Link UP Sep 6 00:06:38.668931 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:06:38.669167 kubelet[2018]: E0906 00:06:38.669043 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:38.669809 systemd-networkd[1367]: lxc_health: Gained carrier Sep 6 00:06:38.724853 env[1647]: time="2025-09-06T00:06:38.724769969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gxwqz,Uid:02b221f9-4815-4c95-a4d4-a6b7c71b8b2d,Namespace:default,Attempt:0,}" Sep 6 00:06:39.308023 systemd-networkd[1367]: lxc24ea497b9d7b: Link UP Sep 6 00:06:39.334971 kernel: eth0: renamed from tmp9f1a8 Sep 6 00:06:39.353212 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc24ea497b9d7b: link becomes ready Sep 6 00:06:39.353579 systemd-networkd[1367]: lxc24ea497b9d7b: Gained carrier Sep 6 00:06:39.670266 kubelet[2018]: E0906 00:06:39.670098 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:39.881778 systemd-networkd[1367]: lxc_health: Gained IPv6LL Sep 6 00:06:40.649687 systemd-networkd[1367]: lxc24ea497b9d7b: Gained IPv6LL Sep 6 00:06:40.671267 kubelet[2018]: E0906 00:06:40.671209 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:41.673071 kubelet[2018]: E0906 00:06:41.673002 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:42.588703 amazon-ssm-agent[1659]: 2025-09-06 00:06:42 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 6 00:06:42.674503 kubelet[2018]: E0906 00:06:42.674411 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:43.674735 kubelet[2018]: E0906 00:06:43.674657 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:44.405052 update_engine[1642]: I0906 00:06:44.404964 1642 update_attempter.cc:509] Updating boot flags... Sep 6 00:06:44.675687 kubelet[2018]: E0906 00:06:44.675214 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:45.676432 kubelet[2018]: E0906 00:06:45.676358 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:46.676803 kubelet[2018]: E0906 00:06:46.676728 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:47.677494 kubelet[2018]: E0906 00:06:47.677398 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:48.678410 kubelet[2018]: E0906 00:06:48.678322 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:48.704197 env[1647]: time="2025-09-06T00:06:48.704061483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:48.704952 env[1647]: time="2025-09-06T00:06:48.704140311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:48.704952 env[1647]: time="2025-09-06T00:06:48.704169764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:48.704952 env[1647]: time="2025-09-06T00:06:48.704553773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f1a8b1094b4ee478621637c9a20060bf32614a1a85762dcda40b72dd0c5e0d9 pid=3259 runtime=io.containerd.runc.v2 Sep 6 00:06:48.734142 systemd[1]: Started cri-containerd-9f1a8b1094b4ee478621637c9a20060bf32614a1a85762dcda40b72dd0c5e0d9.scope. Sep 6 00:06:48.744414 systemd[1]: run-containerd-runc-k8s.io-9f1a8b1094b4ee478621637c9a20060bf32614a1a85762dcda40b72dd0c5e0d9-runc.yNWZXH.mount: Deactivated successfully. Sep 6 00:06:48.832613 env[1647]: time="2025-09-06T00:06:48.832548187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gxwqz,Uid:02b221f9-4815-4c95-a4d4-a6b7c71b8b2d,Namespace:default,Attempt:0,} returns sandbox id \"9f1a8b1094b4ee478621637c9a20060bf32614a1a85762dcda40b72dd0c5e0d9\"" Sep 6 00:06:48.835010 env[1647]: time="2025-09-06T00:06:48.834954151Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:06:49.679404 kubelet[2018]: E0906 00:06:49.679327 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:50.680401 kubelet[2018]: E0906 00:06:50.680328 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:51.680574 kubelet[2018]: E0906 00:06:51.680494 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:52.534530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085548131.mount: Deactivated successfully. 
Sep 6 00:06:52.681478 kubelet[2018]: E0906 00:06:52.681395 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:53.682145 kubelet[2018]: E0906 00:06:53.682070 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:54.652523 kubelet[2018]: E0906 00:06:54.652451 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:54.682311 kubelet[2018]: E0906 00:06:54.682241 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:54.812629 env[1647]: time="2025-09-06T00:06:54.812536679Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:54.815355 env[1647]: time="2025-09-06T00:06:54.815280144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:54.819441 env[1647]: time="2025-09-06T00:06:54.819373426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:54.823697 env[1647]: time="2025-09-06T00:06:54.823639024Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:06:54.827334 env[1647]: time="2025-09-06T00:06:54.825688425Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 00:06:54.831209 env[1647]: time="2025-09-06T00:06:54.831155336Z" level=info msg="CreateContainer within sandbox \"9f1a8b1094b4ee478621637c9a20060bf32614a1a85762dcda40b72dd0c5e0d9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 6 00:06:54.853106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192532682.mount: Deactivated successfully. Sep 6 00:06:54.860849 env[1647]: time="2025-09-06T00:06:54.860763103Z" level=info msg="CreateContainer within sandbox \"9f1a8b1094b4ee478621637c9a20060bf32614a1a85762dcda40b72dd0c5e0d9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"da244d92abeb0aa45df16c45420dab70ae710217f316326c434f629c1ec3692c\"" Sep 6 00:06:54.862008 env[1647]: time="2025-09-06T00:06:54.861847367Z" level=info msg="StartContainer for \"da244d92abeb0aa45df16c45420dab70ae710217f316326c434f629c1ec3692c\"" Sep 6 00:06:54.910098 systemd[1]: Started cri-containerd-da244d92abeb0aa45df16c45420dab70ae710217f316326c434f629c1ec3692c.scope. 
Sep 6 00:06:55.004445 env[1647]: time="2025-09-06T00:06:55.004379566Z" level=info msg="StartContainer for \"da244d92abeb0aa45df16c45420dab70ae710217f316326c434f629c1ec3692c\" returns successfully" Sep 6 00:06:55.174464 kubelet[2018]: I0906 00:06:55.173929 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-gxwqz" podStartSLOduration=11.179329762 podStartE2EDuration="17.173877529s" podCreationTimestamp="2025-09-06 00:06:38 +0000 UTC" firstStartedPulling="2025-09-06 00:06:48.833992639 +0000 UTC m=+36.057362403" lastFinishedPulling="2025-09-06 00:06:54.828540418 +0000 UTC m=+42.051910170" observedRunningTime="2025-09-06 00:06:55.173679651 +0000 UTC m=+42.397049463" watchObservedRunningTime="2025-09-06 00:06:55.173877529 +0000 UTC m=+42.397247293" Sep 6 00:06:55.682789 kubelet[2018]: E0906 00:06:55.682720 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:55.845685 systemd[1]: run-containerd-runc-k8s.io-da244d92abeb0aa45df16c45420dab70ae710217f316326c434f629c1ec3692c-runc.2g7UKU.mount: Deactivated successfully. Sep 6 00:06:56.684041 kubelet[2018]: E0906 00:06:56.683958 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:57.115336 systemd[1]: Created slice kubepods-besteffort-podaeec8cd7_7bbc_41c1_8aa6_0b193ceaea87.slice. Sep 6 00:06:57.166412 kubelet[2018]: I0906 00:06:57.166352 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/aeec8cd7-7bbc-41c1-8aa6-0b193ceaea87-data\") pod \"nfs-server-provisioner-0\" (UID: \"aeec8cd7-7bbc-41c1-8aa6-0b193ceaea87\") " pod="default/nfs-server-provisioner-0" Sep 6 00:06:57.166648 kubelet[2018]: I0906 00:06:57.166429 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hct4h\" (UniqueName: \"kubernetes.io/projected/aeec8cd7-7bbc-41c1-8aa6-0b193ceaea87-kube-api-access-hct4h\") pod \"nfs-server-provisioner-0\" (UID: \"aeec8cd7-7bbc-41c1-8aa6-0b193ceaea87\") " pod="default/nfs-server-provisioner-0" Sep 6 00:06:57.421952 env[1647]: time="2025-09-06T00:06:57.421324594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aeec8cd7-7bbc-41c1-8aa6-0b193ceaea87,Namespace:default,Attempt:0,}" Sep 6 00:06:57.483730 (udev-worker)[3353]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:06:57.485094 (udev-worker)[3354]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:06:57.491081 systemd-networkd[1367]: lxc6e96a1352b88: Link UP Sep 6 00:06:57.499066 kernel: eth0: renamed from tmpfa634 Sep 6 00:06:57.512384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:06:57.512526 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6e96a1352b88: link becomes ready Sep 6 00:06:57.512658 systemd-networkd[1367]: lxc6e96a1352b88: Gained carrier Sep 6 00:06:57.684757 kubelet[2018]: E0906 00:06:57.684539 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:57.864427 env[1647]: time="2025-09-06T00:06:57.864301355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:06:57.866160 env[1647]: time="2025-09-06T00:06:57.864380647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:06:57.866160 env[1647]: time="2025-09-06T00:06:57.864407338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:06:57.866160 env[1647]: time="2025-09-06T00:06:57.864719669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa6349e1a906c9086b1c8e9b27afc0f60fd8e52f5f0b5b1521ab0f7645ca136f pid=3385 runtime=io.containerd.runc.v2 Sep 6 00:06:57.906844 systemd[1]: Started cri-containerd-fa6349e1a906c9086b1c8e9b27afc0f60fd8e52f5f0b5b1521ab0f7645ca136f.scope. Sep 6 00:06:58.005446 env[1647]: time="2025-09-06T00:06:58.004147928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aeec8cd7-7bbc-41c1-8aa6-0b193ceaea87,Namespace:default,Attempt:0,} returns sandbox id \"fa6349e1a906c9086b1c8e9b27afc0f60fd8e52f5f0b5b1521ab0f7645ca136f\"" Sep 6 00:06:58.007576 env[1647]: time="2025-09-06T00:06:58.007479007Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 6 00:06:58.287940 systemd[1]: run-containerd-runc-k8s.io-fa6349e1a906c9086b1c8e9b27afc0f60fd8e52f5f0b5b1521ab0f7645ca136f-runc.TQD28l.mount: Deactivated successfully. Sep 6 00:06:58.685755 kubelet[2018]: E0906 00:06:58.685293 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:06:59.018590 systemd-networkd[1367]: lxc6e96a1352b88: Gained IPv6LL Sep 6 00:06:59.686003 kubelet[2018]: E0906 00:06:59.685781 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:00.686639 kubelet[2018]: E0906 00:07:00.686524 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:01.450850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131180245.mount: Deactivated successfully. 
Sep 6 00:07:01.687616 kubelet[2018]: E0906 00:07:01.687521 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:02.688124 kubelet[2018]: E0906 00:07:02.688010 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:03.689048 kubelet[2018]: E0906 00:07:03.688978 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:04.689929 kubelet[2018]: E0906 00:07:04.689850 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:05.467053 env[1647]: time="2025-09-06T00:07:05.466958999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:05.472877 env[1647]: time="2025-09-06T00:07:05.472804558Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:05.479186 env[1647]: time="2025-09-06T00:07:05.479095854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:05.483956 env[1647]: time="2025-09-06T00:07:05.483866131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:05.484772 env[1647]: time="2025-09-06T00:07:05.484726860Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 6 00:07:05.490330 env[1647]: time="2025-09-06T00:07:05.490267080Z" level=info msg="CreateContainer within sandbox \"fa6349e1a906c9086b1c8e9b27afc0f60fd8e52f5f0b5b1521ab0f7645ca136f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 6 00:07:05.513622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8364712.mount: Deactivated successfully. Sep 6 00:07:05.530464 env[1647]: time="2025-09-06T00:07:05.530338954Z" level=info msg="CreateContainer within sandbox \"fa6349e1a906c9086b1c8e9b27afc0f60fd8e52f5f0b5b1521ab0f7645ca136f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0f5918257b5a87f24c180ed9cd2503508ccac251803e76ff8b81e0ec4320b5cf\"" Sep 6 00:07:05.531816 env[1647]: time="2025-09-06T00:07:05.531767483Z" level=info msg="StartContainer for \"0f5918257b5a87f24c180ed9cd2503508ccac251803e76ff8b81e0ec4320b5cf\"" Sep 6 00:07:05.569001 systemd[1]: Started cri-containerd-0f5918257b5a87f24c180ed9cd2503508ccac251803e76ff8b81e0ec4320b5cf.scope. 
Sep 6 00:07:05.645041 env[1647]: time="2025-09-06T00:07:05.644790505Z" level=info msg="StartContainer for \"0f5918257b5a87f24c180ed9cd2503508ccac251803e76ff8b81e0ec4320b5cf\" returns successfully" Sep 6 00:07:05.690435 kubelet[2018]: E0906 00:07:05.690350 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:06.207488 kubelet[2018]: I0906 00:07:06.207350 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.726320228 podStartE2EDuration="9.207305798s" podCreationTimestamp="2025-09-06 00:06:57 +0000 UTC" firstStartedPulling="2025-09-06 00:06:58.00645862 +0000 UTC m=+45.229828384" lastFinishedPulling="2025-09-06 00:07:05.48744419 +0000 UTC m=+52.710813954" observedRunningTime="2025-09-06 00:07:06.205712801 +0000 UTC m=+53.429082553" watchObservedRunningTime="2025-09-06 00:07:06.207305798 +0000 UTC m=+53.430675562" Sep 6 00:07:06.691344 kubelet[2018]: E0906 00:07:06.691262 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:07.691800 kubelet[2018]: E0906 00:07:07.691748 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:08.693237 kubelet[2018]: E0906 00:07:08.693186 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:09.694371 kubelet[2018]: E0906 00:07:09.694293 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:10.695089 kubelet[2018]: E0906 00:07:10.695006 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:11.695536 kubelet[2018]: E0906 00:07:11.695460 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:12.696416 kubelet[2018]: E0906 00:07:12.696336 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:13.697155 kubelet[2018]: E0906 00:07:13.697105 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:14.652982 kubelet[2018]: E0906 00:07:14.652865 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:14.698767 kubelet[2018]: E0906 00:07:14.698725 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:15.132318 systemd[1]: Created slice kubepods-besteffort-pod63cc51c8_5d9c_42b9_9dbc_b5fb2fef969c.slice. 
Sep 6 00:07:15.196627 kubelet[2018]: I0906 00:07:15.196535 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpl6c\" (UniqueName: \"kubernetes.io/projected/63cc51c8-5d9c-42b9-9dbc-b5fb2fef969c-kube-api-access-qpl6c\") pod \"test-pod-1\" (UID: \"63cc51c8-5d9c-42b9-9dbc-b5fb2fef969c\") " pod="default/test-pod-1" Sep 6 00:07:15.197176 kubelet[2018]: I0906 00:07:15.196863 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab458f7f-6bcb-4f43-aae5-db01b816e04f\" (UniqueName: \"kubernetes.io/nfs/63cc51c8-5d9c-42b9-9dbc-b5fb2fef969c-pvc-ab458f7f-6bcb-4f43-aae5-db01b816e04f\") pod \"test-pod-1\" (UID: \"63cc51c8-5d9c-42b9-9dbc-b5fb2fef969c\") " pod="default/test-pod-1" Sep 6 00:07:15.347952 kernel: FS-Cache: Loaded Sep 6 00:07:15.402983 kernel: RPC: Registered named UNIX socket transport module. Sep 6 00:07:15.403144 kernel: RPC: Registered udp transport module. Sep 6 00:07:15.403201 kernel: RPC: Registered tcp transport module. Sep 6 00:07:15.407215 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 6 00:07:15.488941 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 6 00:07:15.699988 kubelet[2018]: E0906 00:07:15.699815 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:15.726015 kernel: NFS: Registering the id_resolver key type Sep 6 00:07:15.726195 kernel: Key type id_resolver registered Sep 6 00:07:15.727747 kernel: Key type id_legacy registered Sep 6 00:07:15.782341 nfsidmap[3506]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 6 00:07:15.788172 nfsidmap[3507]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 6 00:07:16.040442 env[1647]: time="2025-09-06T00:07:16.039777208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:63cc51c8-5d9c-42b9-9dbc-b5fb2fef969c,Namespace:default,Attempt:0,}" Sep 6 00:07:16.098755 (udev-worker)[3494]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:07:16.099668 (udev-worker)[3504]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:07:16.109560 systemd-networkd[1367]: lxc1e1e66852cef: Link UP Sep 6 00:07:16.122933 kernel: eth0: renamed from tmp3ef7d Sep 6 00:07:16.130923 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:07:16.131046 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1e1e66852cef: link becomes ready Sep 6 00:07:16.131133 systemd-networkd[1367]: lxc1e1e66852cef: Gained carrier Sep 6 00:07:16.465361 env[1647]: time="2025-09-06T00:07:16.464625062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:16.465361 env[1647]: time="2025-09-06T00:07:16.464704854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:16.465689 env[1647]: time="2025-09-06T00:07:16.464732696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:16.466174 env[1647]: time="2025-09-06T00:07:16.466044263Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ef7dc521ed5cd332718688810ada276c603f3840420cc2a3b3bb4a48696a2f6 pid=3531 runtime=io.containerd.runc.v2 Sep 6 00:07:16.509636 systemd[1]: run-containerd-runc-k8s.io-3ef7dc521ed5cd332718688810ada276c603f3840420cc2a3b3bb4a48696a2f6-runc.01gqcR.mount: Deactivated successfully. Sep 6 00:07:16.517682 systemd[1]: Started cri-containerd-3ef7dc521ed5cd332718688810ada276c603f3840420cc2a3b3bb4a48696a2f6.scope. Sep 6 00:07:16.592806 env[1647]: time="2025-09-06T00:07:16.592740460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:63cc51c8-5d9c-42b9-9dbc-b5fb2fef969c,Namespace:default,Attempt:0,} returns sandbox id \"3ef7dc521ed5cd332718688810ada276c603f3840420cc2a3b3bb4a48696a2f6\"" Sep 6 00:07:16.595600 env[1647]: time="2025-09-06T00:07:16.595546712Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:07:16.700456 kubelet[2018]: E0906 00:07:16.700378 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:16.914372 env[1647]: time="2025-09-06T00:07:16.914317463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:16.919170 env[1647]: time="2025-09-06T00:07:16.919096579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:16.922678 env[1647]: time="2025-09-06T00:07:16.922610847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:16.926502 env[1647]: time="2025-09-06T00:07:16.926450322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:16.927936 env[1647]: time="2025-09-06T00:07:16.927848846Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 00:07:16.933056 env[1647]: time="2025-09-06T00:07:16.933002107Z" level=info msg="CreateContainer within sandbox \"3ef7dc521ed5cd332718688810ada276c603f3840420cc2a3b3bb4a48696a2f6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 6 00:07:16.965144 env[1647]: time="2025-09-06T00:07:16.965075278Z" level=info msg="CreateContainer within sandbox \"3ef7dc521ed5cd332718688810ada276c603f3840420cc2a3b3bb4a48696a2f6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9a9cbb1cfda2bc7cd7d258042a933a6c5aff7701f282a310d3a2e980ebc39b4e\"" Sep 6 00:07:16.966594 env[1647]: time="2025-09-06T00:07:16.966541894Z" level=info msg="StartContainer for \"9a9cbb1cfda2bc7cd7d258042a933a6c5aff7701f282a310d3a2e980ebc39b4e\"" Sep 6 00:07:16.997090 systemd[1]: Started cri-containerd-9a9cbb1cfda2bc7cd7d258042a933a6c5aff7701f282a310d3a2e980ebc39b4e.scope. 
Sep 6 00:07:17.064655 env[1647]: time="2025-09-06T00:07:17.064578850Z" level=info msg="StartContainer for \"9a9cbb1cfda2bc7cd7d258042a933a6c5aff7701f282a310d3a2e980ebc39b4e\" returns successfully" Sep 6 00:07:17.234824 kubelet[2018]: I0906 00:07:17.234745 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.898389387 podStartE2EDuration="20.23472333s" podCreationTimestamp="2025-09-06 00:06:57 +0000 UTC" firstStartedPulling="2025-09-06 00:07:16.594224321 +0000 UTC m=+63.817594085" lastFinishedPulling="2025-09-06 00:07:16.930558276 +0000 UTC m=+64.153928028" observedRunningTime="2025-09-06 00:07:17.234618876 +0000 UTC m=+64.457988652" watchObservedRunningTime="2025-09-06 00:07:17.23472333 +0000 UTC m=+64.458093094" Sep 6 00:07:17.701481 kubelet[2018]: E0906 00:07:17.701272 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:18.089422 systemd-networkd[1367]: lxc1e1e66852cef: Gained IPv6LL Sep 6 00:07:18.701987 kubelet[2018]: E0906 00:07:18.701909 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:19.702650 kubelet[2018]: E0906 00:07:19.702573 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:20.703652 kubelet[2018]: E0906 00:07:20.703605 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:21.705140 kubelet[2018]: E0906 00:07:21.705084 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:22.706459 kubelet[2018]: E0906 00:07:22.706327 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:22.715727 systemd[1]: run-containerd-runc-k8s.io-b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553-runc.5C8s2w.mount: Deactivated successfully. Sep 6 00:07:22.759264 env[1647]: time="2025-09-06T00:07:22.759183079Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:07:22.769225 env[1647]: time="2025-09-06T00:07:22.768973492Z" level=info msg="StopContainer for \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\" with timeout 2 (s)" Sep 6 00:07:22.770015 env[1647]: time="2025-09-06T00:07:22.769967803Z" level=info msg="Stop container \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\" with signal terminated" Sep 6 00:07:22.781036 systemd-networkd[1367]: lxc_health: Link DOWN Sep 6 00:07:22.781050 systemd-networkd[1367]: lxc_health: Lost carrier Sep 6 00:07:22.815548 systemd[1]: cri-containerd-b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553.scope: Deactivated successfully. Sep 6 00:07:22.816171 systemd[1]: cri-containerd-b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553.scope: Consumed 16.152s CPU time. Sep 6 00:07:22.850463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553-rootfs.mount: Deactivated successfully. 
Sep 6 00:07:23.165450 env[1647]: time="2025-09-06T00:07:23.165383746Z" level=info msg="shim disconnected" id=b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553 Sep 6 00:07:23.165972 env[1647]: time="2025-09-06T00:07:23.165924049Z" level=warning msg="cleaning up after shim disconnected" id=b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553 namespace=k8s.io Sep 6 00:07:23.166138 env[1647]: time="2025-09-06T00:07:23.166106698Z" level=info msg="cleaning up dead shim" Sep 6 00:07:23.180424 env[1647]: time="2025-09-06T00:07:23.180366452Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3660 runtime=io.containerd.runc.v2\n" Sep 6 00:07:23.184226 env[1647]: time="2025-09-06T00:07:23.184164524Z" level=info msg="StopContainer for \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\" returns successfully" Sep 6 00:07:23.185379 env[1647]: time="2025-09-06T00:07:23.185329170Z" level=info msg="StopPodSandbox for \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\"" Sep 6 00:07:23.185750 env[1647]: time="2025-09-06T00:07:23.185711269Z" level=info msg="Container to stop \"ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:23.186030 env[1647]: time="2025-09-06T00:07:23.185867793Z" level=info msg="Container to stop \"0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:23.186185 env[1647]: time="2025-09-06T00:07:23.186146171Z" level=info msg="Container to stop \"fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:23.186337 env[1647]: time="2025-09-06T00:07:23.186302983Z" level=info msg="Container to stop \"758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:23.186492 env[1647]: time="2025-09-06T00:07:23.186456207Z" level=info msg="Container to stop \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:23.190061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b-shm.mount: Deactivated successfully. Sep 6 00:07:23.201409 systemd[1]: cri-containerd-49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b.scope: Deactivated successfully. 
Sep 6 00:07:23.247342 env[1647]: time="2025-09-06T00:07:23.247271874Z" level=info msg="shim disconnected" id=49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b Sep 6 00:07:23.248025 env[1647]: time="2025-09-06T00:07:23.247970417Z" level=warning msg="cleaning up after shim disconnected" id=49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b namespace=k8s.io Sep 6 00:07:23.248140 env[1647]: time="2025-09-06T00:07:23.248024768Z" level=info msg="cleaning up dead shim" Sep 6 00:07:23.262839 env[1647]: time="2025-09-06T00:07:23.262753710Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3692 runtime=io.containerd.runc.v2\n" Sep 6 00:07:23.263413 env[1647]: time="2025-09-06T00:07:23.263345196Z" level=info msg="TearDown network for sandbox \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" successfully" Sep 6 00:07:23.263413 env[1647]: time="2025-09-06T00:07:23.263397807Z" level=info msg="StopPodSandbox for \"49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b\" returns successfully" Sep 6 00:07:23.356935 kubelet[2018]: I0906 00:07:23.356821 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cni-path" (OuterVolumeSpecName: "cni-path") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.357153 kubelet[2018]: I0906 00:07:23.356733 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cni-path\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.357153 kubelet[2018]: I0906 00:07:23.357052 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9d915df-2c5a-45af-a3c3-7cbbd759f718-clustermesh-secrets\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.357697 kubelet[2018]: I0906 00:07:23.357660 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-etc-cni-netd\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.358063 kubelet[2018]: I0906 00:07:23.358017 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-xtables-lock\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.358257 kubelet[2018]: I0906 00:07:23.358229 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hostproc\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.358647 kubelet[2018]: I0906 00:07:23.358620 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-cgroup\") pod 
\"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.358931 kubelet[2018]: I0906 00:07:23.358875 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz8nt\" (UniqueName: \"kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-kube-api-access-vz8nt\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.359589 kubelet[2018]: I0906 00:07:23.359557 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-bpf-maps\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.359849 kubelet[2018]: I0906 00:07:23.359825 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-lib-modules\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.360121 kubelet[2018]: I0906 00:07:23.360097 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-net\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.360281 kubelet[2018]: I0906 00:07:23.360254 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-kernel\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.360441 kubelet[2018]: I0906 00:07:23.360414 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hubble-tls\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.361794 kubelet[2018]: I0906 00:07:23.361753 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-run\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.362198 kubelet[2018]: I0906 00:07:23.362169 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-config-path\") pod \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\" (UID: \"e9d915df-2c5a-45af-a3c3-7cbbd759f718\") " Sep 6 00:07:23.362808 kubelet[2018]: I0906 00:07:23.362775 2018 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cni-path\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.363987 kubelet[2018]: I0906 00:07:23.357944 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.364187 kubelet[2018]: I0906 00:07:23.358503 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hostproc" (OuterVolumeSpecName: "hostproc") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.364317 kubelet[2018]: I0906 00:07:23.358534 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.364425 kubelet[2018]: I0906 00:07:23.358810 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.364554 kubelet[2018]: I0906 00:07:23.359766 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.364703 kubelet[2018]: I0906 00:07:23.360037 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.364818 kubelet[2018]: I0906 00:07:23.361641 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.364972 kubelet[2018]: I0906 00:07:23.361689 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.365096 kubelet[2018]: I0906 00:07:23.362072 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:23.366800 kubelet[2018]: I0906 00:07:23.366724 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d915df-2c5a-45af-a3c3-7cbbd759f718-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:07:23.373135 kubelet[2018]: I0906 00:07:23.373043 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:07:23.373991 kubelet[2018]: I0906 00:07:23.373943 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:07:23.374219 kubelet[2018]: I0906 00:07:23.373960 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-kube-api-access-vz8nt" (OuterVolumeSpecName: "kube-api-access-vz8nt") pod "e9d915df-2c5a-45af-a3c3-7cbbd759f718" (UID: "e9d915df-2c5a-45af-a3c3-7cbbd759f718"). InnerVolumeSpecName "kube-api-access-vz8nt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:07:23.465980 kubelet[2018]: I0906 00:07:23.464300 2018 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-lib-modules\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.466298 kubelet[2018]: I0906 00:07:23.466242 2018 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-bpf-maps\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.466670 kubelet[2018]: I0906 00:07:23.466606 2018 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hubble-tls\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.466908 kubelet[2018]: I0906 00:07:23.466855 2018 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-run\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.467152 kubelet[2018]: I0906 00:07:23.467100 2018 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-config-path\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.467325 kubelet[2018]: I0906 00:07:23.467298 2018 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-net\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.467517 kubelet[2018]: I0906 00:07:23.467490 2018 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-host-proc-sys-kernel\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.467693 kubelet[2018]: I0906 00:07:23.467668 2018 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-etc-cni-netd\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.467859 kubelet[2018]: I0906 00:07:23.467835 2018 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-xtables-lock\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.468095 kubelet[2018]: I0906 00:07:23.468056 2018 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-hostproc\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.468248 kubelet[2018]: I0906 00:07:23.468224 2018 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9d915df-2c5a-45af-a3c3-7cbbd759f718-clustermesh-secrets\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.468394 kubelet[2018]: I0906 00:07:23.468372 2018 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vz8nt\" (UniqueName: \"kubernetes.io/projected/e9d915df-2c5a-45af-a3c3-7cbbd759f718-kube-api-access-vz8nt\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.468548 kubelet[2018]: I0906 00:07:23.468523 2018 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e9d915df-2c5a-45af-a3c3-7cbbd759f718-cilium-cgroup\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:23.707838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49d4ebfd87059dfb8204f62b5e50f1ecb4b2097ea3359673ff3671ee6884586b-rootfs.mount: Deactivated successfully. Sep 6 00:07:23.708614 kubelet[2018]: E0906 00:07:23.708230 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:23.708068 systemd[1]: var-lib-kubelet-pods-e9d915df\x2d2c5a\x2d45af\x2da3c3\x2d7cbbd759f718-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvz8nt.mount: Deactivated successfully. Sep 6 00:07:23.708256 systemd[1]: var-lib-kubelet-pods-e9d915df\x2d2c5a\x2d45af\x2da3c3\x2d7cbbd759f718-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:07:23.708434 systemd[1]: var-lib-kubelet-pods-e9d915df\x2d2c5a\x2d45af\x2da3c3\x2d7cbbd759f718-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:07:24.254254 kubelet[2018]: I0906 00:07:24.254216 2018 scope.go:117] "RemoveContainer" containerID="b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553" Sep 6 00:07:24.263773 env[1647]: time="2025-09-06T00:07:24.263694062Z" level=info msg="RemoveContainer for \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\"" Sep 6 00:07:24.264385 systemd[1]: Removed slice kubepods-burstable-pode9d915df_2c5a_45af_a3c3_7cbbd759f718.slice. Sep 6 00:07:24.264610 systemd[1]: kubepods-burstable-pode9d915df_2c5a_45af_a3c3_7cbbd759f718.slice: Consumed 16.450s CPU time. Sep 6 00:07:24.270681 env[1647]: time="2025-09-06T00:07:24.270620446Z" level=info msg="RemoveContainer for \"b6a82af86a23f62b983f1b1610f54b610c131ce48c58ee498cc6350426af6553\" returns successfully" Sep 6 00:07:24.271512 kubelet[2018]: I0906 00:07:24.271443 2018 scope.go:117] "RemoveContainer" containerID="758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322" Sep 6 00:07:24.274177 env[1647]: time="2025-09-06T00:07:24.274102131Z" level=info msg="RemoveContainer for \"758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322\"" Sep 6 00:07:24.279141 env[1647]: time="2025-09-06T00:07:24.279057465Z" level=info msg="RemoveContainer for \"758badd50df18c15ccf054b49950cfb538f3be03b2e1097425504b7e7b763322\" returns successfully" Sep 6 00:07:24.279735 kubelet[2018]: I0906 00:07:24.279701 2018 scope.go:117] "RemoveContainer" containerID="fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99" Sep 6 00:07:24.282109 env[1647]: time="2025-09-06T00:07:24.282036996Z" level=info msg="RemoveContainer for \"fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99\"" Sep 6 00:07:24.287545 env[1647]: time="2025-09-06T00:07:24.287472426Z" level=info msg="RemoveContainer for \"fc67c631824f19e9bf30a4b7d20d9270a15876a8c1be8b7fd6ff62068fc3ce99\" returns successfully" Sep 6 00:07:24.288909 kubelet[2018]: I0906 00:07:24.288828 2018 scope.go:117] "RemoveContainer" containerID="0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562" Sep 6 00:07:24.291658 env[1647]: time="2025-09-06T00:07:24.291584190Z" level=info msg="RemoveContainer for \"0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562\"" Sep 6 00:07:24.303597 env[1647]: time="2025-09-06T00:07:24.303511259Z" level=info msg="RemoveContainer for \"0eb36cdb2cd1999cee6db558194d837a4622d1d5aad8fb809262deb4bbe22562\" returns successfully" Sep 6 00:07:24.304196 kubelet[2018]: 
I0906 00:07:24.304043 2018 scope.go:117] "RemoveContainer" containerID="ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124" Sep 6 00:07:24.307008 env[1647]: time="2025-09-06T00:07:24.306942169Z" level=info msg="RemoveContainer for \"ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124\"" Sep 6 00:07:24.312456 env[1647]: time="2025-09-06T00:07:24.312384631Z" level=info msg="RemoveContainer for \"ff487dcf560a8ce2731e2ae2350edc555147c6f8bb6c8255ebf12970db844124\" returns successfully" Sep 6 00:07:24.710613 kubelet[2018]: E0906 00:07:24.709367 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:24.875202 kubelet[2018]: E0906 00:07:24.875127 2018 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:25.013769 kubelet[2018]: I0906 00:07:25.013666 2018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d915df-2c5a-45af-a3c3-7cbbd759f718" path="/var/lib/kubelet/pods/e9d915df-2c5a-45af-a3c3-7cbbd759f718/volumes" Sep 6 00:07:25.709906 kubelet[2018]: E0906 00:07:25.709818 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:26.264363 kubelet[2018]: I0906 00:07:26.264270 2018 setters.go:602] "Node became not ready" node="172.31.27.196" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:07:26Z","lastTransitionTime":"2025-09-06T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:07:26.710447 kubelet[2018]: E0906 00:07:26.709998 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:26.886234 kubelet[2018]: I0906 00:07:26.886187 2018 memory_manager.go:355] "RemoveStaleState removing state" podUID="e9d915df-2c5a-45af-a3c3-7cbbd759f718" containerName="cilium-agent" Sep 6 00:07:26.896110 systemd[1]: Created slice kubepods-besteffort-pod0e662379_a40b_44d3_85f9_8a76c7dea242.slice. Sep 6 00:07:26.964687 systemd[1]: Created slice kubepods-burstable-pod8ee57c0a_94c1_4772_9c1c_2334c469d752.slice. 
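The systemd unit names recurring in these entries follow two mappings that are visible directly in the log: pod cgroup slices are the QoS class plus the pod UID with dashes turned into underscores (kubepods-burstable-pod8ee57c0a_94c1_4772_9c1c_2334c469d752.slice above), and kubelet volume mounts are systemd-escaped paths in which '-' becomes \x2d and '~' becomes \x7e. The Go sketch below reproduces both mappings as observed here; it is a simplified illustration, not kubelet or systemd source.

```go
// unitnames.go — a simplified sketch of the unit-name patterns seen in this journal.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds a slice name of the form seen above:
// QoS class plus the pod UID with '-' replaced by '_'.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

// escapePath is a simplified version of systemd path escaping: strip the leading
// '/', map '/' to '-', keep ASCII alphanumerics plus ':', '_' and '.', and
// hex-escape everything else (so '-' becomes \x2d and '~' becomes \x7e).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(podSliceName("burstable", "8ee57c0a-94c1-4772-9c1c-2334c469d752"))
	// kubepods-burstable-pod8ee57c0a_94c1_4772_9c1c_2334c469d752.slice

	fmt.Println(escapePath("/var/lib/kubelet/pods/e9d915df-2c5a-45af-a3c3-7cbbd759f718/volumes/kubernetes.io~projected/kube-api-access-vz8nt") + ".mount")
	// var-lib-kubelet-pods-e9d915df\x2d2c5a\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvz8nt.mount
}
```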
Sep 6 00:07:26.990402 kubelet[2018]: I0906 00:07:26.990330 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q896\" (UniqueName: \"kubernetes.io/projected/0e662379-a40b-44d3-85f9-8a76c7dea242-kube-api-access-5q896\") pod \"cilium-operator-6c4d7847fc-kfs2z\" (UID: \"0e662379-a40b-44d3-85f9-8a76c7dea242\") " pod="kube-system/cilium-operator-6c4d7847fc-kfs2z" Sep 6 00:07:26.990605 kubelet[2018]: I0906 00:07:26.990408 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e662379-a40b-44d3-85f9-8a76c7dea242-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kfs2z\" (UID: \"0e662379-a40b-44d3-85f9-8a76c7dea242\") " pod="kube-system/cilium-operator-6c4d7847fc-kfs2z" Sep 6 00:07:27.091116 kubelet[2018]: I0906 00:07:27.091044 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56k7p\" (UniqueName: \"kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-kube-api-access-56k7p\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091274 kubelet[2018]: I0906 00:07:27.091146 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-run\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091274 kubelet[2018]: I0906 00:07:27.091213 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-bpf-maps\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091426 kubelet[2018]: I0906 00:07:27.091297 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-lib-modules\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091426 kubelet[2018]: I0906 00:07:27.091339 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-clustermesh-secrets\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091426 kubelet[2018]: I0906 00:07:27.091402 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-config-path\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091609 kubelet[2018]: I0906 00:07:27.091491 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-kernel\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091609 kubelet[2018]: I0906 00:07:27.091553 2018 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-net\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091609 kubelet[2018]: I0906 00:07:27.091603 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-xtables-lock\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091769 kubelet[2018]: I0906 00:07:27.091665 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-ipsec-secrets\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091769 kubelet[2018]: I0906 00:07:27.091728 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-hostproc\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091942 kubelet[2018]: I0906 00:07:27.091780 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-cgroup\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091942 kubelet[2018]: I0906 00:07:27.091850 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cni-path\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.091942 kubelet[2018]: I0906 00:07:27.091926 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-etc-cni-netd\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.092132 kubelet[2018]: I0906 00:07:27.091971 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-hubble-tls\") pod \"cilium-x8k9j\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " pod="kube-system/cilium-x8k9j" Sep 6 00:07:27.230583 env[1647]: time="2025-09-06T00:07:27.228333480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kfs2z,Uid:0e662379-a40b-44d3-85f9-8a76c7dea242,Namespace:kube-system,Attempt:0,}" Sep 6 00:07:27.272648 env[1647]: time="2025-09-06T00:07:27.272480504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:27.272900 env[1647]: time="2025-09-06T00:07:27.272583277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:27.273291 env[1647]: time="2025-09-06T00:07:27.272863034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:27.273692 env[1647]: time="2025-09-06T00:07:27.273574260Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97ced395d6a5125bfda5e89f5fae55bb6a4c6f880d9e09ceaeb8b8d6b50ee865 pid=3725 runtime=io.containerd.runc.v2 Sep 6 00:07:27.276356 env[1647]: time="2025-09-06T00:07:27.276295690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8k9j,Uid:8ee57c0a-94c1-4772-9c1c-2334c469d752,Namespace:kube-system,Attempt:0,}" Sep 6 00:07:27.303902 systemd[1]: Started cri-containerd-97ced395d6a5125bfda5e89f5fae55bb6a4c6f880d9e09ceaeb8b8d6b50ee865.scope. Sep 6 00:07:27.323596 env[1647]: time="2025-09-06T00:07:27.323470926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:27.324869 env[1647]: time="2025-09-06T00:07:27.324673684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:27.325243 env[1647]: time="2025-09-06T00:07:27.325167627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:27.328702 env[1647]: time="2025-09-06T00:07:27.328520600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66 pid=3753 runtime=io.containerd.runc.v2 Sep 6 00:07:27.358328 systemd[1]: Started cri-containerd-60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66.scope. 
Sep 6 00:07:27.429974 env[1647]: time="2025-09-06T00:07:27.429857722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kfs2z,Uid:0e662379-a40b-44d3-85f9-8a76c7dea242,Namespace:kube-system,Attempt:0,} returns sandbox id \"97ced395d6a5125bfda5e89f5fae55bb6a4c6f880d9e09ceaeb8b8d6b50ee865\"" Sep 6 00:07:27.433588 env[1647]: time="2025-09-06T00:07:27.433407719Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:07:27.443328 env[1647]: time="2025-09-06T00:07:27.443268722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8k9j,Uid:8ee57c0a-94c1-4772-9c1c-2334c469d752,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66\"" Sep 6 00:07:27.449786 env[1647]: time="2025-09-06T00:07:27.449700525Z" level=info msg="CreateContainer within sandbox \"60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:07:27.474389 env[1647]: time="2025-09-06T00:07:27.474301207Z" level=info msg="CreateContainer within sandbox \"60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\"" Sep 6 00:07:27.475100 env[1647]: time="2025-09-06T00:07:27.475053343Z" level=info msg="StartContainer for \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\"" Sep 6 00:07:27.506204 systemd[1]: Started cri-containerd-8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60.scope. Sep 6 00:07:27.543219 systemd[1]: cri-containerd-8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60.scope: Deactivated successfully. 
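The two sandbox IDs returned above (97ced395… for cilium-operator-6c4d7847fc-kfs2z, 60d3d5d8… for cilium-x8k9j) reappear below as shim paths, scope units, and StopPodSandbox targets. A small, hypothetical helper like the following (not part of the logged system) can make it easier to trace a single ID through a journal dump of this size:

```go
// followid.go — hypothetical helper: print every journal line mentioning an ID prefix.
// Usage: followid 60d3d5d8 < journal.log
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: followid <id-prefix> < logfile")
		os.Exit(1)
	}
	id := os.Args[1]
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if strings.Contains(sc.Text(), id) {
			fmt.Println(sc.Text())
		}
	}
}
```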
Sep 6 00:07:27.570298 env[1647]: time="2025-09-06T00:07:27.570194557Z" level=info msg="shim disconnected" id=8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60 Sep 6 00:07:27.570298 env[1647]: time="2025-09-06T00:07:27.570281526Z" level=warning msg="cleaning up after shim disconnected" id=8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60 namespace=k8s.io Sep 6 00:07:27.570708 env[1647]: time="2025-09-06T00:07:27.570306907Z" level=info msg="cleaning up dead shim" Sep 6 00:07:27.585554 env[1647]: time="2025-09-06T00:07:27.585467363Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3824 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:07:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:07:27.587471 env[1647]: time="2025-09-06T00:07:27.587295746Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Sep 6 00:07:27.590093 env[1647]: time="2025-09-06T00:07:27.590018112Z" level=error msg="Failed to pipe stdout of container \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\"" error="reading from a closed fifo" Sep 6 00:07:27.591247 env[1647]: time="2025-09-06T00:07:27.591176383Z" level=error msg="Failed to pipe stderr of container \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\"" error="reading from a closed fifo" Sep 6 00:07:27.595042 env[1647]: time="2025-09-06T00:07:27.594950287Z" level=error msg="StartContainer for \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:07:27.595551 kubelet[2018]: E0906 00:07:27.595485 2018 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60" Sep 6 00:07:27.596324 kubelet[2018]: E0906 00:07:27.596274 2018 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 00:07:27.596324 kubelet[2018]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:07:27.596324 kubelet[2018]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:07:27.596324 kubelet[2018]: rm /hostbin/cilium-mount Sep 6 00:07:27.596639 kubelet[2018]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-x8k9j_kube-system(8ee57c0a-94c1-4772-9c1c-2334c469d752): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:07:27.596639 kubelet[2018]: > logger="UnhandledError" Sep 6 00:07:27.598044 kubelet[2018]: E0906 00:07:27.597966 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x8k9j" podUID="8ee57c0a-94c1-4772-9c1c-2334c469d752" Sep 6 00:07:27.710762 kubelet[2018]: E0906 00:07:27.710680 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:28.272554 env[1647]: time="2025-09-06T00:07:28.272473310Z" level=info msg="StopPodSandbox for \"60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66\"" Sep 6 00:07:28.273294 env[1647]: time="2025-09-06T00:07:28.272597900Z" level=info msg="Container to stop \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:28.276326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66-shm.mount: Deactivated successfully. Sep 6 00:07:28.294569 systemd[1]: cri-containerd-60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66.scope: Deactivated successfully. Sep 6 00:07:28.338271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66-rootfs.mount: Deactivated successfully. 
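The StartContainer failure above appears to come from runc applying the SELinux label requested in the mount-cgroup spec (SELinuxOptions Type:spc_t, Level:s0): the write to /proc/self/attr/keycreate is rejected with "invalid argument" on this host. A minimal diagnostic sketch of that single step follows; it is not runc itself, and the full label string is an assumption, since only the type and level appear in the spec above.

```go
// keycreate_probe.go — a minimal diagnostic sketch that attempts the same procfs
// write that fails in the log above. On a host that rejects it, expect
// "invalid argument", matching the logged runc error.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical full label; the pod spec above only specifies Type:spc_t, Level:s0.
	label := "system_u:system_r:spc_t:s0"
	if err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0); err != nil {
		fmt.Printf("write /proc/self/attr/keycreate: %v\n", err)
		return
	}
	fmt.Println("keycreate label accepted")
}
```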
Sep 6 00:07:28.351981 env[1647]: time="2025-09-06T00:07:28.351868348Z" level=info msg="shim disconnected" id=60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66 Sep 6 00:07:28.351981 env[1647]: time="2025-09-06T00:07:28.351977241Z" level=warning msg="cleaning up after shim disconnected" id=60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66 namespace=k8s.io Sep 6 00:07:28.352330 env[1647]: time="2025-09-06T00:07:28.352001902Z" level=info msg="cleaning up dead shim" Sep 6 00:07:28.373357 env[1647]: time="2025-09-06T00:07:28.373291447Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3857 runtime=io.containerd.runc.v2\n" Sep 6 00:07:28.373959 env[1647]: time="2025-09-06T00:07:28.373872490Z" level=info msg="TearDown network for sandbox \"60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66\" successfully" Sep 6 00:07:28.374132 env[1647]: time="2025-09-06T00:07:28.373958246Z" level=info msg="StopPodSandbox for \"60d3d5d821a0ba839716c08bdd1e140ed84bc9d0e3a0895645b185b952e54e66\" returns successfully" Sep 6 00:07:28.503741 kubelet[2018]: I0906 00:07:28.503526 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-kernel\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.504053 kubelet[2018]: I0906 00:07:28.503786 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-clustermesh-secrets\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.504053 kubelet[2018]: I0906 00:07:28.503634 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.506985 kubelet[2018]: I0906 00:07:28.504258 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-net\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.506985 kubelet[2018]: I0906 00:07:28.505138 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-xtables-lock\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.506985 kubelet[2018]: I0906 00:07:28.505396 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.506985 kubelet[2018]: I0906 00:07:28.505432 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.507590 kubelet[2018]: I0906 00:07:28.507537 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-cgroup\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.508289 kubelet[2018]: I0906 00:07:28.507855 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-etc-cni-netd\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.513858 systemd[1]: var-lib-kubelet-pods-8ee57c0a\x2d94c1\x2d4772\x2d9c1c\x2d2334c469d752-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:07:28.516283 kubelet[2018]: I0906 00:07:28.516227 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-hubble-tls\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516458 kubelet[2018]: I0906 00:07:28.516300 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-ipsec-secrets\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516458 kubelet[2018]: I0906 00:07:28.516341 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-run\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516458 kubelet[2018]: I0906 00:07:28.516379 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-lib-modules\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516458 kubelet[2018]: I0906 00:07:28.516417 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-config-path\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516705 kubelet[2018]: I0906 00:07:28.516458 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56k7p\" (UniqueName: \"kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-kube-api-access-56k7p\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516705 kubelet[2018]: I0906 00:07:28.516496 2018 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-bpf-maps\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516705 kubelet[2018]: I0906 00:07:28.516534 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-hostproc\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516705 kubelet[2018]: I0906 00:07:28.516571 2018 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cni-path\") pod \"8ee57c0a-94c1-4772-9c1c-2334c469d752\" (UID: \"8ee57c0a-94c1-4772-9c1c-2334c469d752\") " Sep 6 00:07:28.516705 kubelet[2018]: I0906 00:07:28.516643 2018 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-kernel\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.516705 kubelet[2018]: I0906 00:07:28.516669 2018 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-host-proc-sys-net\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.516705 kubelet[2018]: I0906 00:07:28.516693 2018 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-xtables-lock\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.517216 kubelet[2018]: I0906 00:07:28.507625 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.517216 kubelet[2018]: I0906 00:07:28.508135 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.517216 kubelet[2018]: I0906 00:07:28.516756 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.521444 kubelet[2018]: I0906 00:07:28.521352 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.521630 kubelet[2018]: I0906 00:07:28.521492 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.521705 kubelet[2018]: I0906 00:07:28.521674 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:07:28.528753 kubelet[2018]: I0906 00:07:28.526232 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.534054 kubelet[2018]: I0906 00:07:28.527061 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:07:28.536522 kubelet[2018]: I0906 00:07:28.536467 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:07:28.543325 systemd[1]: var-lib-kubelet-pods-8ee57c0a\x2d94c1\x2d4772\x2d9c1c\x2d2334c469d752-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:07:28.547366 kubelet[2018]: I0906 00:07:28.547298 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:07:28.551972 kubelet[2018]: I0906 00:07:28.551587 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:07:28.553539 kubelet[2018]: I0906 00:07:28.553427 2018 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-kube-api-access-56k7p" (OuterVolumeSpecName: "kube-api-access-56k7p") pod "8ee57c0a-94c1-4772-9c1c-2334c469d752" (UID: "8ee57c0a-94c1-4772-9c1c-2334c469d752"). InnerVolumeSpecName "kube-api-access-56k7p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:07:28.617551 kubelet[2018]: I0906 00:07:28.617485 2018 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-hostproc\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.617551 kubelet[2018]: I0906 00:07:28.617545 2018 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cni-path\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.617551 kubelet[2018]: I0906 00:07:28.617570 2018 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-56k7p\" (UniqueName: \"kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-kube-api-access-56k7p\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617596 2018 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-bpf-maps\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617618 2018 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-clustermesh-secrets\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617639 2018 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-etc-cni-netd\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617660 2018 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ee57c0a-94c1-4772-9c1c-2334c469d752-hubble-tls\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617681 2018 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-cgroup\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617701 2018 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-lib-modules\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617760 2018 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-config-path\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 00:07:28.617784 2018 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-ipsec-secrets\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.618260 kubelet[2018]: I0906 
00:07:28.617806 2018 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ee57c0a-94c1-4772-9c1c-2334c469d752-cilium-run\") on node \"172.31.27.196\" DevicePath \"\"" Sep 6 00:07:28.711759 kubelet[2018]: E0906 00:07:28.711681 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:29.021595 systemd[1]: Removed slice kubepods-burstable-pod8ee57c0a_94c1_4772_9c1c_2334c469d752.slice. Sep 6 00:07:29.111250 systemd[1]: var-lib-kubelet-pods-8ee57c0a\x2d94c1\x2d4772\x2d9c1c\x2d2334c469d752-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d56k7p.mount: Deactivated successfully. Sep 6 00:07:29.111440 systemd[1]: var-lib-kubelet-pods-8ee57c0a\x2d94c1\x2d4772\x2d9c1c\x2d2334c469d752-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:07:29.290581 kubelet[2018]: I0906 00:07:29.289281 2018 scope.go:117] "RemoveContainer" containerID="8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60" Sep 6 00:07:29.296108 env[1647]: time="2025-09-06T00:07:29.296051601Z" level=info msg="RemoveContainer for \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\"" Sep 6 00:07:29.305067 env[1647]: time="2025-09-06T00:07:29.305005579Z" level=info msg="RemoveContainer for \"8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60\" returns successfully" Sep 6 00:07:29.357997 kubelet[2018]: I0906 00:07:29.357933 2018 memory_manager.go:355] "RemoveStaleState removing state" podUID="8ee57c0a-94c1-4772-9c1c-2334c469d752" containerName="mount-cgroup" Sep 6 00:07:29.370464 systemd[1]: Created slice kubepods-burstable-pod2d6de0a7_3f13_46c6_a5a9_1bea65bfed62.slice. Sep 6 00:07:29.423367 kubelet[2018]: I0906 00:07:29.423260 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-lib-modules\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.423575 kubelet[2018]: I0906 00:07:29.423470 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-cilium-config-path\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.423575 kubelet[2018]: I0906 00:07:29.423519 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-cilium-cgroup\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.423710 kubelet[2018]: I0906 00:07:29.423584 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-xtables-lock\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.423710 kubelet[2018]: I0906 00:07:29.423623 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-clustermesh-secrets\") pod 
\"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.423844 kubelet[2018]: I0906 00:07:29.423751 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-hostproc\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.423844 kubelet[2018]: I0906 00:07:29.423794 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-host-proc-sys-kernel\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424196 kubelet[2018]: I0906 00:07:29.423868 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-hubble-tls\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424196 kubelet[2018]: I0906 00:07:29.423949 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4f6w\" (UniqueName: \"kubernetes.io/projected/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-kube-api-access-m4f6w\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424196 kubelet[2018]: I0906 00:07:29.424018 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-cni-path\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424196 kubelet[2018]: I0906 00:07:29.424062 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-cilium-ipsec-secrets\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424196 kubelet[2018]: I0906 00:07:29.424128 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-bpf-maps\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424510 kubelet[2018]: I0906 00:07:29.424221 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-etc-cni-netd\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424510 kubelet[2018]: I0906 00:07:29.424288 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-host-proc-sys-net\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.424510 kubelet[2018]: I0906 00:07:29.424330 2018 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d6de0a7-3f13-46c6-a5a9-1bea65bfed62-cilium-run\") pod \"cilium-l657l\" (UID: \"2d6de0a7-3f13-46c6-a5a9-1bea65bfed62\") " pod="kube-system/cilium-l657l" Sep 6 00:07:29.672556 env[1647]: time="2025-09-06T00:07:29.670984472Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:29.677771 env[1647]: time="2025-09-06T00:07:29.677708433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:29.681549 env[1647]: time="2025-09-06T00:07:29.681469037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:29.682827 env[1647]: time="2025-09-06T00:07:29.682767197Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 00:07:29.684245 env[1647]: time="2025-09-06T00:07:29.684166115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l657l,Uid:2d6de0a7-3f13-46c6-a5a9-1bea65bfed62,Namespace:kube-system,Attempt:0,}" Sep 6 00:07:29.688105 env[1647]: time="2025-09-06T00:07:29.688044888Z" level=info msg="CreateContainer within sandbox \"97ced395d6a5125bfda5e89f5fae55bb6a4c6f880d9e09ceaeb8b8d6b50ee865\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:07:29.712135 kubelet[2018]: E0906 00:07:29.712061 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:29.723436 env[1647]: time="2025-09-06T00:07:29.723359751Z" level=info msg="CreateContainer within sandbox \"97ced395d6a5125bfda5e89f5fae55bb6a4c6f880d9e09ceaeb8b8d6b50ee865\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b\"" Sep 6 00:07:29.724803 env[1647]: time="2025-09-06T00:07:29.724725043Z" level=info msg="StartContainer for \"2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b\"" Sep 6 00:07:29.738960 env[1647]: time="2025-09-06T00:07:29.738748057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:29.738960 env[1647]: time="2025-09-06T00:07:29.738848430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:29.738960 env[1647]: time="2025-09-06T00:07:29.738876439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:29.739850 env[1647]: time="2025-09-06T00:07:29.739689837Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21 pid=3894 runtime=io.containerd.runc.v2 Sep 6 00:07:29.771976 systemd[1]: Started cri-containerd-2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b.scope. Sep 6 00:07:29.783231 systemd[1]: Started cri-containerd-57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21.scope. Sep 6 00:07:29.869702 env[1647]: time="2025-09-06T00:07:29.869548833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l657l,Uid:2d6de0a7-3f13-46c6-a5a9-1bea65bfed62,Namespace:kube-system,Attempt:0,} returns sandbox id \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\"" Sep 6 00:07:29.870822 env[1647]: time="2025-09-06T00:07:29.870734836Z" level=info msg="StartContainer for \"2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b\" returns successfully" Sep 6 00:07:29.877555 env[1647]: time="2025-09-06T00:07:29.877469862Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:07:29.878033 kubelet[2018]: E0906 00:07:29.877956 2018 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:07:29.907045 env[1647]: time="2025-09-06T00:07:29.906864142Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989\"" Sep 6 00:07:29.908263 env[1647]: time="2025-09-06T00:07:29.908115572Z" level=info msg="StartContainer for \"e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989\"" Sep 6 00:07:29.939029 systemd[1]: Started cri-containerd-e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989.scope. Sep 6 00:07:30.031306 env[1647]: time="2025-09-06T00:07:30.031230935Z" level=info msg="StartContainer for \"e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989\" returns successfully" Sep 6 00:07:30.058055 systemd[1]: cri-containerd-e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989.scope: Deactivated successfully. 
Sep 6 00:07:30.193101 env[1647]: time="2025-09-06T00:07:30.192931909Z" level=info msg="shim disconnected" id=e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989 Sep 6 00:07:30.193652 env[1647]: time="2025-09-06T00:07:30.193595335Z" level=warning msg="cleaning up after shim disconnected" id=e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989 namespace=k8s.io Sep 6 00:07:30.193846 env[1647]: time="2025-09-06T00:07:30.193813121Z" level=info msg="cleaning up dead shim" Sep 6 00:07:30.217403 env[1647]: time="2025-09-06T00:07:30.217339428Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4008 runtime=io.containerd.runc.v2\n" Sep 6 00:07:30.312078 env[1647]: time="2025-09-06T00:07:30.311788720Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:07:30.352099 env[1647]: time="2025-09-06T00:07:30.351995521Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8\"" Sep 6 00:07:30.353109 env[1647]: time="2025-09-06T00:07:30.353043025Z" level=info msg="StartContainer for \"f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8\"" Sep 6 00:07:30.376587 kubelet[2018]: I0906 00:07:30.376472 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kfs2z" podStartSLOduration=2.124108511 podStartE2EDuration="4.376440421s" podCreationTimestamp="2025-09-06 00:07:26 +0000 UTC" firstStartedPulling="2025-09-06 00:07:27.432800034 +0000 UTC m=+74.656169798" lastFinishedPulling="2025-09-06 00:07:29.685131944 +0000 UTC m=+76.908501708" observedRunningTime="2025-09-06 00:07:30.323268162 +0000 UTC m=+77.546637950" watchObservedRunningTime="2025-09-06 00:07:30.376440421 +0000 UTC m=+77.599810197" Sep 6 00:07:30.385839 systemd[1]: Started cri-containerd-f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8.scope. Sep 6 00:07:30.467877 env[1647]: time="2025-09-06T00:07:30.467668093Z" level=info msg="StartContainer for \"f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8\" returns successfully" Sep 6 00:07:30.481017 systemd[1]: cri-containerd-f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8.scope: Deactivated successfully. 
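A quick check of the startup-latency record above, using only the logged values: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, 00:07:30.376440421 - 00:07:26 = 4.376440421 s; podStartSLOduration subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling = 00:07:29.685131944 - 00:07:27.432800034 = 2.252331910 s, so 4.376440421 - 2.252331910 = 2.124108511 s, matching the figures kubelet reports for cilium-operator-6c4d7847fc-kfs2z.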
Sep 6 00:07:30.524917 env[1647]: time="2025-09-06T00:07:30.524810880Z" level=info msg="shim disconnected" id=f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8 Sep 6 00:07:30.524917 env[1647]: time="2025-09-06T00:07:30.524905252Z" level=warning msg="cleaning up after shim disconnected" id=f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8 namespace=k8s.io Sep 6 00:07:30.525261 env[1647]: time="2025-09-06T00:07:30.524930333Z" level=info msg="cleaning up dead shim" Sep 6 00:07:30.540834 env[1647]: time="2025-09-06T00:07:30.540761032Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4068 runtime=io.containerd.runc.v2\n" Sep 6 00:07:30.686971 kubelet[2018]: W0906 00:07:30.685095 2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee57c0a_94c1_4772_9c1c_2334c469d752.slice/cri-containerd-8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60.scope WatchSource:0}: container "8a674f7fac6530d7682eff1bc5d69827124e3b711fb79b07d28088f441abba60" in namespace "k8s.io": not found Sep 6 00:07:30.712929 kubelet[2018]: E0906 00:07:30.712840 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:31.014247 kubelet[2018]: I0906 00:07:31.014165 2018 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee57c0a-94c1-4772-9c1c-2334c469d752" path="/var/lib/kubelet/pods/8ee57c0a-94c1-4772-9c1c-2334c469d752/volumes" Sep 6 00:07:31.113033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8-rootfs.mount: Deactivated successfully. Sep 6 00:07:31.320273 env[1647]: time="2025-09-06T00:07:31.319733387Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:07:31.352183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376272606.mount: Deactivated successfully. Sep 6 00:07:31.370239 env[1647]: time="2025-09-06T00:07:31.370170853Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638\"" Sep 6 00:07:31.371406 env[1647]: time="2025-09-06T00:07:31.371346391Z" level=info msg="StartContainer for \"257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638\"" Sep 6 00:07:31.414266 systemd[1]: Started cri-containerd-257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638.scope. Sep 6 00:07:31.497076 systemd[1]: cri-containerd-257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638.scope: Deactivated successfully. 
Sep 6 00:07:31.499282 env[1647]: time="2025-09-06T00:07:31.499212188Z" level=info msg="StartContainer for \"257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638\" returns successfully" Sep 6 00:07:31.543784 env[1647]: time="2025-09-06T00:07:31.543703021Z" level=info msg="shim disconnected" id=257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638 Sep 6 00:07:31.544165 env[1647]: time="2025-09-06T00:07:31.543786173Z" level=warning msg="cleaning up after shim disconnected" id=257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638 namespace=k8s.io Sep 6 00:07:31.544165 env[1647]: time="2025-09-06T00:07:31.543819163Z" level=info msg="cleaning up dead shim" Sep 6 00:07:31.559877 env[1647]: time="2025-09-06T00:07:31.559726706Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4127 runtime=io.containerd.runc.v2\n" Sep 6 00:07:31.713699 kubelet[2018]: E0906 00:07:31.713481 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:32.324400 env[1647]: time="2025-09-06T00:07:32.324345229Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:07:32.355008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827181855.mount: Deactivated successfully. Sep 6 00:07:32.367690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84974305.mount: Deactivated successfully. Sep 6 00:07:32.376702 env[1647]: time="2025-09-06T00:07:32.376618696Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695\"" Sep 6 00:07:32.378190 env[1647]: time="2025-09-06T00:07:32.378124297Z" level=info msg="StartContainer for \"db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695\"" Sep 6 00:07:32.409592 systemd[1]: Started cri-containerd-db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695.scope. Sep 6 00:07:32.479283 systemd[1]: cri-containerd-db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695.scope: Deactivated successfully. 
Sep 6 00:07:32.482270 env[1647]: time="2025-09-06T00:07:32.481555792Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6de0a7_3f13_46c6_a5a9_1bea65bfed62.slice/cri-containerd-db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695.scope/memory.events\": no such file or directory" Sep 6 00:07:32.486218 env[1647]: time="2025-09-06T00:07:32.486141932Z" level=info msg="StartContainer for \"db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695\" returns successfully" Sep 6 00:07:32.525630 env[1647]: time="2025-09-06T00:07:32.525550384Z" level=info msg="shim disconnected" id=db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695 Sep 6 00:07:32.525954 env[1647]: time="2025-09-06T00:07:32.525631088Z" level=warning msg="cleaning up after shim disconnected" id=db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695 namespace=k8s.io Sep 6 00:07:32.525954 env[1647]: time="2025-09-06T00:07:32.525656157Z" level=info msg="cleaning up dead shim" Sep 6 00:07:32.538821 env[1647]: time="2025-09-06T00:07:32.538743242Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4186 runtime=io.containerd.runc.v2\n" Sep 6 00:07:32.714720 kubelet[2018]: E0906 00:07:32.714552 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:33.331293 env[1647]: time="2025-09-06T00:07:33.331214739Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:07:33.364867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159053872.mount: Deactivated successfully. Sep 6 00:07:33.381616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322981412.mount: Deactivated successfully. Sep 6 00:07:33.385578 env[1647]: time="2025-09-06T00:07:33.385516203Z" level=info msg="CreateContainer within sandbox \"57dd7928124e6dc81c794c1b11509e52bc115a2c5ad052ae5bbe2478edb18d21\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871\"" Sep 6 00:07:33.386867 env[1647]: time="2025-09-06T00:07:33.386808758Z" level=info msg="StartContainer for \"361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871\"" Sep 6 00:07:33.422247 systemd[1]: Started cri-containerd-361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871.scope. 
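
The *cgroupsv2.Manager.EventChan warning above shows containerd racing a short-lived container: by the time it tries to add an inotify watch on the transient .scope cgroup's memory.events file, the clean-cilium-state task has already exited and the cgroup directory is gone, hence "no such file or directory". Under cgroup v2 these memory events are reported by watching that file rather than through the old v1 cgroup.event_control/eventfd mechanism. A small sketch of such a watch; the path and the IN_MODIFY mask are illustrative assumptions, not containerd's exact code:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Hypothetical cgroup path; a kubepods .scope would sit further down the tree.
        cgroupPath := "/sys/fs/cgroup/kubepods.slice"

        fd, err := unix.InotifyInit1(unix.IN_CLOEXEC)
        if err != nil {
            fmt.Fprintln(os.Stderr, "inotify_init1:", err)
            return
        }
        defer unix.Close(fd)

        // memory.events is rewritten by the kernel whenever a counter changes,
        // so IN_MODIFY is enough to notice oom/oom_kill/high events.
        wd, err := unix.InotifyAddWatch(fd, cgroupPath+"/memory.events", unix.IN_MODIFY)
        if err != nil {
            // A scope removed before the watch is added fails with
            // "no such file or directory", exactly as in the log.
            fmt.Fprintln(os.Stderr, "inotify_add_watch:", err)
            return
        }
        fmt.Println("watching memory.events, wd =", wd)
    }
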
Sep 6 00:07:33.512428 env[1647]: time="2025-09-06T00:07:33.512357924Z" level=info msg="StartContainer for \"361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871\" returns successfully" Sep 6 00:07:33.715778 kubelet[2018]: E0906 00:07:33.714984 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:33.809571 kubelet[2018]: W0906 00:07:33.809517 2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6de0a7_3f13_46c6_a5a9_1bea65bfed62.slice/cri-containerd-e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989.scope WatchSource:0}: task e8d26c7e6e0380070c260030a61458bc2bfda023298078456bfa3184d6ab5989 not found: not found Sep 6 00:07:34.320952 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 6 00:07:34.390588 kubelet[2018]: I0906 00:07:34.390491 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l657l" podStartSLOduration=5.390469751 podStartE2EDuration="5.390469751s" podCreationTimestamp="2025-09-06 00:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:07:34.389624842 +0000 UTC m=+81.612994642" watchObservedRunningTime="2025-09-06 00:07:34.390469751 +0000 UTC m=+81.613839515" Sep 6 00:07:34.653140 kubelet[2018]: E0906 00:07:34.652998 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:34.715489 kubelet[2018]: E0906 00:07:34.715439 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:35.716922 kubelet[2018]: E0906 00:07:35.716786 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:36.718292 kubelet[2018]: E0906 00:07:36.718242 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:36.925417 kubelet[2018]: W0906 00:07:36.918606 2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6de0a7_3f13_46c6_a5a9_1bea65bfed62.slice/cri-containerd-f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8.scope WatchSource:0}: task f3a6658d47eb7e8bb509cd0be579763a070ff12ed0e810dd4a306beff6e2adb8 not found: not found Sep 6 00:07:37.719356 kubelet[2018]: E0906 00:07:37.719297 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:38.503428 systemd[1]: run-containerd-runc-k8s.io-361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871-runc.W6OiZU.mount: Deactivated successfully. Sep 6 00:07:38.533292 systemd-networkd[1367]: lxc_health: Link UP Sep 6 00:07:38.544179 (udev-worker)[4741]: Network interface NamePolicy= disabled on kernel command line. 
Sep 6 00:07:38.560099 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:07:38.556779 systemd-networkd[1367]: lxc_health: Gained carrier Sep 6 00:07:38.720324 kubelet[2018]: E0906 00:07:38.720226 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:39.722692 kubelet[2018]: E0906 00:07:39.722617 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:40.059174 kubelet[2018]: W0906 00:07:40.050620 2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6de0a7_3f13_46c6_a5a9_1bea65bfed62.slice/cri-containerd-257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638.scope WatchSource:0}: task 257eb8de23507f55536ca6aedef508226637cc05a66fb4db503d0aff396c4638 not found: not found Sep 6 00:07:40.553176 systemd-networkd[1367]: lxc_health: Gained IPv6LL Sep 6 00:07:40.723485 kubelet[2018]: E0906 00:07:40.723400 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:40.849388 systemd[1]: run-containerd-runc-k8s.io-361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871-runc.cWy4sU.mount: Deactivated successfully. Sep 6 00:07:41.724119 kubelet[2018]: E0906 00:07:41.724006 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:42.724348 kubelet[2018]: E0906 00:07:42.724270 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:43.169207 kubelet[2018]: W0906 00:07:43.169144 2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6de0a7_3f13_46c6_a5a9_1bea65bfed62.slice/cri-containerd-db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695.scope WatchSource:0}: task db5f5de01ce67f34534f10124ad4cd0046c2156cb0a2d302086275d1d93bd695 not found: not found Sep 6 00:07:43.255227 systemd[1]: run-containerd-runc-k8s.io-361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871-runc.8IOXWY.mount: Deactivated successfully. Sep 6 00:07:43.725266 kubelet[2018]: E0906 00:07:43.725216 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:44.726913 kubelet[2018]: E0906 00:07:44.726830 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:45.579812 systemd[1]: run-containerd-runc-k8s.io-361b650e30a745e7732240d41d9bfb17f6bc6a688042a7d4fddfe5ea33627871-runc.3Cj9JO.mount: Deactivated successfully. 
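
The lxc_health messages are the Cilium health-check interface coming up: ADDRCONF(NETDEV_CHANGE) and "Gained carrier" report the link going operationally up, and "Gained IPv6LL" at 00:07:40 means its IPv6 link-local address finished duplicate address detection. The same state is visible in sysfs; a quick Go check, with only the interface name taken from the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        iface := "lxc_health"
        // operstate should read "up" once the link is ready; carrier flips to "1"
        // when systemd-networkd logs "Gained carrier".
        for _, attr := range []string{"operstate", "carrier"} {
            b, err := os.ReadFile(filepath.Join("/sys/class/net", iface, attr))
            if err != nil {
                fmt.Fprintf(os.Stderr, "%s: %v\n", attr, err)
                continue
            }
            fmt.Printf("%s %s = %s\n", iface, attr, strings.TrimSpace(string(b)))
        }
    }
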
Sep 6 00:07:45.728422 kubelet[2018]: E0906 00:07:45.728328 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:46.728873 kubelet[2018]: E0906 00:07:46.728814 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:47.730300 kubelet[2018]: E0906 00:07:47.730245 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:48.731670 kubelet[2018]: E0906 00:07:48.731615 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:49.732983 kubelet[2018]: E0906 00:07:49.732927 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:50.734078 kubelet[2018]: E0906 00:07:50.734024 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:51.735445 kubelet[2018]: E0906 00:07:51.735392 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:52.736991 kubelet[2018]: E0906 00:07:52.736864 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:53.737504 kubelet[2018]: E0906 00:07:53.737426 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:54.652914 kubelet[2018]: E0906 00:07:54.652835 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:54.738490 kubelet[2018]: E0906 00:07:54.738408 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:55.738977 kubelet[2018]: E0906 00:07:55.738928 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:56.740231 kubelet[2018]: E0906 00:07:56.740128 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:57.741156 kubelet[2018]: E0906 00:07:57.741082 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:58.742191 kubelet[2018]: E0906 00:07:58.742140 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:59.743684 kubelet[2018]: E0906 00:07:59.743631 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:00.745546 kubelet[2018]: E0906 00:08:00.745467 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:01.745726 kubelet[2018]: E0906 00:08:01.745648 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:02.746715 kubelet[2018]: E0906 00:08:02.746664 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:03.748487 kubelet[2018]: E0906 00:08:03.748438 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:08:04.749880 kubelet[2018]: E0906 00:08:04.749828 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:05.751039 kubelet[2018]: E0906 00:08:05.750965 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:06.332738 systemd[1]: cri-containerd-2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b.scope: Deactivated successfully. Sep 6 00:08:06.368845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b-rootfs.mount: Deactivated successfully. Sep 6 00:08:06.379857 env[1647]: time="2025-09-06T00:08:06.379792136Z" level=info msg="shim disconnected" id=2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b Sep 6 00:08:06.380861 env[1647]: time="2025-09-06T00:08:06.380777939Z" level=warning msg="cleaning up after shim disconnected" id=2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b namespace=k8s.io Sep 6 00:08:06.380861 env[1647]: time="2025-09-06T00:08:06.380840159Z" level=info msg="cleaning up dead shim" Sep 6 00:08:06.395295 env[1647]: time="2025-09-06T00:08:06.395230111Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4872 runtime=io.containerd.runc.v2\n" Sep 6 00:08:06.435671 kubelet[2018]: I0906 00:08:06.435313 2018 scope.go:117] "RemoveContainer" containerID="2262de4b811a3d18c55a31d40332c37b22ad3ac27345025f77b44fd027d2813b" Sep 6 00:08:06.438839 env[1647]: time="2025-09-06T00:08:06.438783055Z" level=info msg="CreateContainer within sandbox \"97ced395d6a5125bfda5e89f5fae55bb6a4c6f880d9e09ceaeb8b8d6b50ee865\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Sep 6 00:08:06.471098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171217128.mount: Deactivated successfully. Sep 6 00:08:06.483595 env[1647]: time="2025-09-06T00:08:06.483524099Z" level=info msg="CreateContainer within sandbox \"97ced395d6a5125bfda5e89f5fae55bb6a4c6f880d9e09ceaeb8b8d6b50ee865\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"557312a0cfdb21f9f3983ab0aed37ffeda6b37dde365d15d15b2454d5856d42e\"" Sep 6 00:08:06.484532 env[1647]: time="2025-09-06T00:08:06.484486070Z" level=info msg="StartContainer for \"557312a0cfdb21f9f3983ab0aed37ffeda6b37dde365d15d15b2454d5856d42e\"" Sep 6 00:08:06.531800 systemd[1]: Started cri-containerd-557312a0cfdb21f9f3983ab0aed37ffeda6b37dde365d15d15b2454d5856d42e.scope.
Sep 6 00:08:06.613533 env[1647]: time="2025-09-06T00:08:06.613246665Z" level=info msg="StartContainer for \"557312a0cfdb21f9f3983ab0aed37ffeda6b37dde365d15d15b2454d5856d42e\" returns successfully" Sep 6 00:08:06.752210 kubelet[2018]: E0906 00:08:06.752045 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:06.756005 kubelet[2018]: E0906 00:08:06.755546 2018 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.196?timeout=10s\": context deadline exceeded" Sep 6 00:08:07.752855 kubelet[2018]: E0906 00:08:07.752806 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:08.754130 kubelet[2018]: E0906 00:08:08.754083 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:09.755962 kubelet[2018]: E0906 00:08:09.755872 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:10.756923 kubelet[2018]: E0906 00:08:10.756811 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
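
The lease error at 00:08:06.755 is the node-lease heartbeat for node 172.31.27.196 timing out against the API server at 172.31.30.143:6443: the ?timeout=10s query is the timeout kubelet asks the server to apply, and "context deadline exceeded" means the client-side request context expired before a response arrived. A stripped-down illustration of how that error shape is produced, using plain net/http with the URL from the log (real kubelet traffic carries a Lease object, credentials, and retries, all omitted here):

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        url := "https://172.31.30.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.196?timeout=10s"

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, strings.NewReader("{}"))
        if err != nil {
            panic(err)
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            // If the API server does not answer within 10s, err wraps
            // context.DeadlineExceeded, matching the kubelet log line.
            fmt.Println("lease update failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("lease update status:", resp.Status)
    }
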