Sep 9 00:25:49.715677 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 9 00:25:49.715727 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Sep 8 23:23:23 -00 2025 Sep 9 00:25:49.715736 kernel: efi: EFI v2.70 by EDK II Sep 9 00:25:49.715742 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Sep 9 00:25:49.715747 kernel: random: crng init done Sep 9 00:25:49.715753 kernel: ACPI: Early table checksum verification disabled Sep 9 00:25:49.715759 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Sep 9 00:25:49.715766 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:25:49.715772 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715777 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715782 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715788 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715793 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715799 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715808 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715814 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715820 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:49.715827 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 9 00:25:49.715833 kernel: NUMA: Failed to initialise from firmware Sep 9 00:25:49.715839 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:25:49.715845 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Sep 9 00:25:49.715850 kernel: Zone ranges: Sep 9 00:25:49.715856 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:25:49.715863 kernel: DMA32 empty Sep 9 00:25:49.715869 kernel: Normal empty Sep 9 00:25:49.715874 kernel: Movable zone start for each node Sep 9 00:25:49.715880 kernel: Early memory node ranges Sep 9 00:25:49.715886 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Sep 9 00:25:49.715892 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Sep 9 00:25:49.715897 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Sep 9 00:25:49.715910 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Sep 9 00:25:49.715921 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Sep 9 00:25:49.715927 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Sep 9 00:25:49.715932 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Sep 9 00:25:49.715939 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:25:49.715946 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 9 00:25:49.715951 kernel: psci: probing for conduit method from ACPI. Sep 9 00:25:49.715957 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 9 00:25:49.715963 kernel: psci: Using standard PSCI v0.2 function IDs Sep 9 00:25:49.715969 kernel: psci: Trusted OS migration not required Sep 9 00:25:49.715978 kernel: psci: SMC Calling Convention v1.1 Sep 9 00:25:49.715984 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 9 00:25:49.715992 kernel: ACPI: SRAT not present Sep 9 00:25:49.715998 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Sep 9 00:25:49.716005 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Sep 9 00:25:49.716011 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 9 00:25:49.716017 kernel: Detected PIPT I-cache on CPU0 Sep 9 00:25:49.716023 kernel: CPU features: detected: GIC system register CPU interface Sep 9 00:25:49.716029 kernel: CPU features: detected: Hardware dirty bit management Sep 9 00:25:49.716036 kernel: CPU features: detected: Spectre-v4 Sep 9 00:25:49.716042 kernel: CPU features: detected: Spectre-BHB Sep 9 00:25:49.716049 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 9 00:25:49.716055 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 9 00:25:49.716062 kernel: CPU features: detected: ARM erratum 1418040 Sep 9 00:25:49.716068 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 9 00:25:49.716074 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 9 00:25:49.716080 kernel: Policy zone: DMA Sep 9 00:25:49.716087 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923 Sep 9 00:25:49.716094 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:25:49.716100 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:25:49.716106 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:25:49.716113 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:25:49.716120 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Sep 9 00:25:49.716127 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:25:49.716133 kernel: trace event string verifier disabled Sep 9 00:25:49.716140 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:25:49.716147 kernel: rcu: RCU event tracing is enabled. Sep 9 00:25:49.716153 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:25:49.716160 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:25:49.716166 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:25:49.716172 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 9 00:25:49.716179 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:25:49.716185 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 9 00:25:49.716192 kernel: GICv3: 256 SPIs implemented Sep 9 00:25:49.716199 kernel: GICv3: 0 Extended SPIs implemented Sep 9 00:25:49.716205 kernel: GICv3: Distributor has no Range Selector support Sep 9 00:25:49.716211 kernel: Root IRQ handler: gic_handle_irq Sep 9 00:25:49.716217 kernel: GICv3: 16 PPIs implemented Sep 9 00:25:49.716223 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 9 00:25:49.716250 kernel: ACPI: SRAT not present Sep 9 00:25:49.716257 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 9 00:25:49.716263 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Sep 9 00:25:49.716270 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Sep 9 00:25:49.716277 kernel: GICv3: using LPI property table @0x00000000400d0000 Sep 9 00:25:49.716283 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Sep 9 00:25:49.716290 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:25:49.716297 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 9 00:25:49.716303 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 9 00:25:49.716310 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 9 00:25:49.716316 kernel: arm-pv: using stolen time PV Sep 9 00:25:49.716322 kernel: Console: colour dummy device 80x25 Sep 9 00:25:49.716328 kernel: ACPI: Core revision 20210730 Sep 9 00:25:49.716335 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 9 00:25:49.716342 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:25:49.716348 kernel: LSM: Security Framework initializing Sep 9 00:25:49.716356 kernel: SELinux: Initializing. Sep 9 00:25:49.716362 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:25:49.716369 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:25:49.716375 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:25:49.716382 kernel: Platform MSI: ITS@0x8080000 domain created Sep 9 00:25:49.716388 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 9 00:25:49.716394 kernel: Remapping and enabling EFI services. Sep 9 00:25:49.716401 kernel: smp: Bringing up secondary CPUs ... 
Sep 9 00:25:49.716407 kernel: Detected PIPT I-cache on CPU1 Sep 9 00:25:49.716414 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 9 00:25:49.716421 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Sep 9 00:25:49.716427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:25:49.716434 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 9 00:25:49.716440 kernel: Detected PIPT I-cache on CPU2 Sep 9 00:25:49.716447 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 9 00:25:49.716453 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Sep 9 00:25:49.716460 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:25:49.716467 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 9 00:25:49.716473 kernel: Detected PIPT I-cache on CPU3 Sep 9 00:25:49.716480 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 9 00:25:49.716487 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Sep 9 00:25:49.716493 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:25:49.716500 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 9 00:25:49.716511 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:25:49.716519 kernel: SMP: Total of 4 processors activated. Sep 9 00:25:49.716526 kernel: CPU features: detected: 32-bit EL0 Support Sep 9 00:25:49.716533 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 9 00:25:49.716540 kernel: CPU features: detected: Common not Private translations Sep 9 00:25:49.716547 kernel: CPU features: detected: CRC32 instructions Sep 9 00:25:49.716568 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 9 00:25:49.716575 kernel: CPU features: detected: LSE atomic instructions Sep 9 00:25:49.716584 kernel: CPU features: detected: Privileged Access Never Sep 9 00:25:49.716590 kernel: CPU features: detected: RAS Extension Support Sep 9 00:25:49.716597 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 9 00:25:49.716603 kernel: CPU: All CPU(s) started at EL1 Sep 9 00:25:49.716610 kernel: alternatives: patching kernel code Sep 9 00:25:49.716619 kernel: devtmpfs: initialized Sep 9 00:25:49.716625 kernel: KASLR enabled Sep 9 00:25:49.716633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:25:49.716640 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:25:49.716646 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:25:49.716653 kernel: SMBIOS 3.0.0 present. 
Sep 9 00:25:49.716660 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Sep 9 00:25:49.716666 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:25:49.716673 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 9 00:25:49.716682 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 9 00:25:49.716693 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 9 00:25:49.716701 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:25:49.716707 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 Sep 9 00:25:49.716714 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:25:49.716721 kernel: cpuidle: using governor menu Sep 9 00:25:49.716728 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 9 00:25:49.716735 kernel: ASID allocator initialised with 32768 entries Sep 9 00:25:49.716741 kernel: ACPI: bus type PCI registered Sep 9 00:25:49.716749 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:25:49.716755 kernel: Serial: AMBA PL011 UART driver Sep 9 00:25:49.716762 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:25:49.716769 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 9 00:25:49.716775 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:25:49.716782 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 9 00:25:49.716789 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:25:49.716795 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 00:25:49.716802 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:25:49.716810 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:25:49.716817 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:25:49.716823 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 9 00:25:49.716830 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 9 00:25:49.716837 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 9 00:25:49.716843 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:25:49.716850 kernel: ACPI: Interpreter enabled Sep 9 00:25:49.716857 kernel: ACPI: Using GIC for interrupt routing Sep 9 00:25:49.716864 kernel: ACPI: MCFG table detected, 1 entries Sep 9 00:25:49.716872 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 9 00:25:49.716879 kernel: printk: console [ttyAMA0] enabled Sep 9 00:25:49.716885 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:25:49.717029 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:25:49.717095 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 00:25:49.717156 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 00:25:49.717216 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 9 00:25:49.717286 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 9 00:25:49.717295 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 9 00:25:49.717302 kernel: PCI host bridge to bus 0000:00 Sep 9 00:25:49.717370 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 9 00:25:49.717432 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 9 00:25:49.717488 kernel: pci_bus 0000:00: root bus 
resource [mem 0x8000000000-0xffffffffff window] Sep 9 00:25:49.717542 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:25:49.717630 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 9 00:25:49.717711 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 9 00:25:49.717776 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 9 00:25:49.717838 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 9 00:25:49.717898 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 00:25:49.717959 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 00:25:49.718020 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 9 00:25:49.718082 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 9 00:25:49.718137 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 9 00:25:49.718190 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 9 00:25:49.718245 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 9 00:25:49.718254 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 9 00:25:49.718261 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 9 00:25:49.718268 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 9 00:25:49.718274 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 9 00:25:49.718283 kernel: iommu: Default domain type: Translated Sep 9 00:25:49.718290 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 00:25:49.718296 kernel: vgaarb: loaded Sep 9 00:25:49.718303 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 9 00:25:49.718310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 9 00:25:49.718316 kernel: PTP clock support registered Sep 9 00:25:49.718323 kernel: Registered efivars operations Sep 9 00:25:49.718330 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 00:25:49.718337 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:25:49.718345 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:25:49.718352 kernel: pnp: PnP ACPI init Sep 9 00:25:49.718433 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 9 00:25:49.718445 kernel: pnp: PnP ACPI: found 1 devices Sep 9 00:25:49.718453 kernel: NET: Registered PF_INET protocol family Sep 9 00:25:49.718460 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:25:49.718467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:25:49.718475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:25:49.718486 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:25:49.718493 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 9 00:25:49.718501 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:25:49.718511 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:25:49.718519 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:25:49.718526 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:25:49.718533 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:25:49.718540 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 9 00:25:49.718547 kernel: kvm [1]: HYP mode not available Sep 
9 00:25:49.718561 kernel: Initialise system trusted keyrings Sep 9 00:25:49.718568 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:25:49.718575 kernel: Key type asymmetric registered Sep 9 00:25:49.718582 kernel: Asymmetric key parser 'x509' registered Sep 9 00:25:49.718589 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 9 00:25:49.718598 kernel: io scheduler mq-deadline registered Sep 9 00:25:49.718604 kernel: io scheduler kyber registered Sep 9 00:25:49.718611 kernel: io scheduler bfq registered Sep 9 00:25:49.718618 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 9 00:25:49.718626 kernel: ACPI: button: Power Button [PWRB] Sep 9 00:25:49.718633 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 00:25:49.718780 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 9 00:25:49.718792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:25:49.718806 kernel: thunder_xcv, ver 1.0 Sep 9 00:25:49.718813 kernel: thunder_bgx, ver 1.0 Sep 9 00:25:49.718819 kernel: nicpf, ver 1.0 Sep 9 00:25:49.718826 kernel: nicvf, ver 1.0 Sep 9 00:25:49.718912 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 00:25:49.718989 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:25:49 UTC (1757377549) Sep 9 00:25:49.718998 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 00:25:49.719005 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:25:49.719014 kernel: Segment Routing with IPv6 Sep 9 00:25:49.719021 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:25:49.719028 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:25:49.719034 kernel: Key type dns_resolver registered Sep 9 00:25:49.719043 kernel: registered taskstats version 1 Sep 9 00:25:49.719051 kernel: Loading compiled-in X.509 certificates Sep 9 00:25:49.719058 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 14b3f28443a1a4b809c7c0337ab8c3dc8fdb5252' Sep 9 00:25:49.719066 kernel: Key type .fscrypt registered Sep 9 00:25:49.719073 kernel: Key type fscrypt-provisioning registered Sep 9 00:25:49.719080 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:25:49.719086 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:25:49.719095 kernel: ima: No architecture policies found Sep 9 00:25:49.719102 kernel: clk: Disabling unused clocks Sep 9 00:25:49.719108 kernel: Freeing unused kernel memory: 36416K Sep 9 00:25:49.719116 kernel: Run /init as init process Sep 9 00:25:49.719123 kernel: with arguments: Sep 9 00:25:49.719129 kernel: /init Sep 9 00:25:49.719136 kernel: with environment: Sep 9 00:25:49.719142 kernel: HOME=/ Sep 9 00:25:49.719148 kernel: TERM=linux Sep 9 00:25:49.719155 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:25:49.719164 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 9 00:25:49.719174 systemd[1]: Detected virtualization kvm. Sep 9 00:25:49.719181 systemd[1]: Detected architecture arm64. Sep 9 00:25:49.719188 systemd[1]: Running in initrd. Sep 9 00:25:49.719195 systemd[1]: No hostname configured, using default hostname. Sep 9 00:25:49.719202 systemd[1]: Hostname set to . 
Sep 9 00:25:49.719209 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:25:49.719217 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:25:49.719224 systemd[1]: Started systemd-ask-password-console.path. Sep 9 00:25:49.719232 systemd[1]: Reached target cryptsetup.target. Sep 9 00:25:49.719239 systemd[1]: Reached target paths.target. Sep 9 00:25:49.719246 systemd[1]: Reached target slices.target. Sep 9 00:25:49.719253 systemd[1]: Reached target swap.target. Sep 9 00:25:49.719260 systemd[1]: Reached target timers.target. Sep 9 00:25:49.719268 systemd[1]: Listening on iscsid.socket. Sep 9 00:25:49.719275 systemd[1]: Listening on iscsiuio.socket. Sep 9 00:25:49.719284 systemd[1]: Listening on systemd-journald-audit.socket. Sep 9 00:25:49.719291 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 9 00:25:49.719298 systemd[1]: Listening on systemd-journald.socket. Sep 9 00:25:49.719305 systemd[1]: Listening on systemd-networkd.socket. Sep 9 00:25:49.719313 systemd[1]: Listening on systemd-udevd-control.socket. Sep 9 00:25:49.719320 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 9 00:25:49.719327 systemd[1]: Reached target sockets.target. Sep 9 00:25:49.719334 systemd[1]: Starting kmod-static-nodes.service... Sep 9 00:25:49.719341 systemd[1]: Finished network-cleanup.service. Sep 9 00:25:49.719349 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:25:49.719356 systemd[1]: Starting systemd-journald.service... Sep 9 00:25:49.719363 systemd[1]: Starting systemd-modules-load.service... Sep 9 00:25:49.719370 systemd[1]: Starting systemd-resolved.service... Sep 9 00:25:49.719377 systemd[1]: Starting systemd-vconsole-setup.service... Sep 9 00:25:49.719384 systemd[1]: Finished kmod-static-nodes.service. Sep 9 00:25:49.719391 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:25:49.719398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 9 00:25:49.719405 systemd[1]: Finished systemd-vconsole-setup.service. Sep 9 00:25:49.719414 kernel: audit: type=1130 audit(1757377549.716:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.719425 systemd-journald[290]: Journal started Sep 9 00:25:49.719466 systemd-journald[290]: Runtime Journal (/run/log/journal/de7c848c27a341d2b9784ec4643dc5b0) is 6.0M, max 48.7M, 42.6M free. Sep 9 00:25:49.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.718933 systemd-modules-load[291]: Inserted module 'overlay' Sep 9 00:25:49.724996 systemd[1]: Starting dracut-cmdline-ask.service... Sep 9 00:25:49.725719 systemd[1]: Started systemd-journald.service. Sep 9 00:25:49.729520 kernel: audit: type=1130 audit(1757377549.725:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.729567 kernel: audit: type=1130 audit(1757377549.729:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:49.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.727919 systemd-resolved[292]: Positive Trust Anchors: Sep 9 00:25:49.727926 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:25:49.727953 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 9 00:25:49.728516 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 9 00:25:49.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.733062 systemd-resolved[292]: Defaulting to hostname 'linux'. Sep 9 00:25:49.740888 kernel: audit: type=1130 audit(1757377549.736:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.733967 systemd[1]: Started systemd-resolved.service. Sep 9 00:25:49.740283 systemd[1]: Reached target nss-lookup.target. Sep 9 00:25:49.746574 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:25:49.748260 systemd[1]: Finished dracut-cmdline-ask.service. Sep 9 00:25:49.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.751702 kernel: audit: type=1130 audit(1757377549.748:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.751722 kernel: Bridge firewalling registered Sep 9 00:25:49.752154 systemd-modules-load[291]: Inserted module 'br_netfilter' Sep 9 00:25:49.752187 systemd[1]: Starting dracut-cmdline.service... Sep 9 00:25:49.761226 dracut-cmdline[307]: dracut-dracut-053 Sep 9 00:25:49.763508 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923 Sep 9 00:25:49.767207 kernel: SCSI subsystem initialized Sep 9 00:25:49.771416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 9 00:25:49.771470 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:25:49.771482 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 9 00:25:49.773675 systemd-modules-load[291]: Inserted module 'dm_multipath' Sep 9 00:25:49.774497 systemd[1]: Finished systemd-modules-load.service. Sep 9 00:25:49.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.777768 kernel: audit: type=1130 audit(1757377549.775:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.778480 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:25:49.785649 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:25:49.788744 kernel: audit: type=1130 audit(1757377549.786:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.824719 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:25:49.836710 kernel: iscsi: registered transport (tcp) Sep 9 00:25:49.851733 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:25:49.851771 kernel: QLogic iSCSI HBA Driver Sep 9 00:25:49.887585 systemd[1]: Finished dracut-cmdline.service. Sep 9 00:25:49.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.889268 systemd[1]: Starting dracut-pre-udev.service... Sep 9 00:25:49.892235 kernel: audit: type=1130 audit(1757377549.888:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:49.933728 kernel: raid6: neonx8 gen() 13668 MB/s Sep 9 00:25:49.950714 kernel: raid6: neonx8 xor() 10743 MB/s Sep 9 00:25:49.967711 kernel: raid6: neonx4 gen() 13409 MB/s Sep 9 00:25:49.984712 kernel: raid6: neonx4 xor() 10768 MB/s Sep 9 00:25:50.001712 kernel: raid6: neonx2 gen() 12812 MB/s Sep 9 00:25:50.018723 kernel: raid6: neonx2 xor() 10262 MB/s Sep 9 00:25:50.035716 kernel: raid6: neonx1 gen() 10565 MB/s Sep 9 00:25:50.052728 kernel: raid6: neonx1 xor() 8776 MB/s Sep 9 00:25:50.069723 kernel: raid6: int64x8 gen() 6257 MB/s Sep 9 00:25:50.086710 kernel: raid6: int64x8 xor() 3531 MB/s Sep 9 00:25:50.103717 kernel: raid6: int64x4 gen() 7192 MB/s Sep 9 00:25:50.120719 kernel: raid6: int64x4 xor() 3841 MB/s Sep 9 00:25:50.137724 kernel: raid6: int64x2 gen() 6147 MB/s Sep 9 00:25:50.154718 kernel: raid6: int64x2 xor() 3317 MB/s Sep 9 00:25:50.171720 kernel: raid6: int64x1 gen() 5037 MB/s Sep 9 00:25:50.189042 kernel: raid6: int64x1 xor() 2641 MB/s Sep 9 00:25:50.189088 kernel: raid6: using algorithm neonx8 gen() 13668 MB/s Sep 9 00:25:50.189098 kernel: raid6: .... 
xor() 10743 MB/s, rmw enabled Sep 9 00:25:50.189107 kernel: raid6: using neon recovery algorithm Sep 9 00:25:50.199919 kernel: xor: measuring software checksum speed Sep 9 00:25:50.199962 kernel: 8regs : 17224 MB/sec Sep 9 00:25:50.200978 kernel: 32regs : 20702 MB/sec Sep 9 00:25:50.200992 kernel: arm64_neon : 26919 MB/sec Sep 9 00:25:50.201001 kernel: xor: using function: arm64_neon (26919 MB/sec) Sep 9 00:25:50.253731 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 9 00:25:50.264356 systemd[1]: Finished dracut-pre-udev.service. Sep 9 00:25:50.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:50.267000 audit: BPF prog-id=7 op=LOAD Sep 9 00:25:50.267716 kernel: audit: type=1130 audit(1757377550.264:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:50.267000 audit: BPF prog-id=8 op=LOAD Sep 9 00:25:50.268180 systemd[1]: Starting systemd-udevd.service... Sep 9 00:25:50.283116 systemd-udevd[489]: Using default interface naming scheme 'v252'. Sep 9 00:25:50.286573 systemd[1]: Started systemd-udevd.service. Sep 9 00:25:50.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:50.288621 systemd[1]: Starting dracut-pre-trigger.service... Sep 9 00:25:50.299356 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation Sep 9 00:25:50.327284 systemd[1]: Finished dracut-pre-trigger.service. Sep 9 00:25:50.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:50.328761 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:25:50.365247 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:25:50.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:50.398755 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:25:50.412073 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:25:50.412089 kernel: GPT:9289727 != 19775487 Sep 9 00:25:50.412097 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:25:50.412106 kernel: GPT:9289727 != 19775487 Sep 9 00:25:50.412113 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:25:50.412121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:50.422711 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (544) Sep 9 00:25:50.428905 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 9 00:25:50.431713 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 9 00:25:50.432486 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 9 00:25:50.438883 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 9 00:25:50.442231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:25:50.443750 systemd[1]: Starting disk-uuid.service... 
Sep 9 00:25:50.508481 disk-uuid[561]: Primary Header is updated. Sep 9 00:25:50.508481 disk-uuid[561]: Secondary Entries is updated. Sep 9 00:25:50.508481 disk-uuid[561]: Secondary Header is updated. Sep 9 00:25:50.512229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:50.516709 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:50.519709 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:51.521410 disk-uuid[562]: The operation has completed successfully. Sep 9 00:25:51.522449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:51.545214 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:25:51.545307 systemd[1]: Finished disk-uuid.service. Sep 9 00:25:51.546736 systemd[1]: Starting verity-setup.service... Sep 9 00:25:51.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.558719 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 9 00:25:51.579141 systemd[1]: Found device dev-mapper-usr.device. Sep 9 00:25:51.580473 systemd[1]: Mounting sysusr-usr.mount... Sep 9 00:25:51.581190 systemd[1]: Finished verity-setup.service. Sep 9 00:25:51.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.626710 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 9 00:25:51.626980 systemd[1]: Mounted sysusr-usr.mount. Sep 9 00:25:51.627646 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 9 00:25:51.628361 systemd[1]: Starting ignition-setup.service... Sep 9 00:25:51.630078 systemd[1]: Starting parse-ip-for-networkd.service... Sep 9 00:25:51.637074 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:25:51.637110 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:25:51.637125 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:25:51.645681 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:25:51.651139 systemd[1]: Finished ignition-setup.service. Sep 9 00:25:51.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.652556 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 9 00:25:51.697059 ignition[651]: Ignition 2.14.0 Sep 9 00:25:51.697068 ignition[651]: Stage: fetch-offline Sep 9 00:25:51.697104 ignition[651]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:51.697114 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:51.697243 ignition[651]: parsed url from cmdline: "" Sep 9 00:25:51.697246 ignition[651]: no config URL provided Sep 9 00:25:51.697251 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:25:51.697258 ignition[651]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:25:51.697275 ignition[651]: op(1): [started] loading QEMU firmware config module Sep 9 00:25:51.697281 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:25:51.705441 ignition[651]: op(1): [finished] loading QEMU firmware config module Sep 9 00:25:51.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.708861 systemd[1]: Finished parse-ip-for-networkd.service. Sep 9 00:25:51.709000 audit: BPF prog-id=9 op=LOAD Sep 9 00:25:51.711029 systemd[1]: Starting systemd-networkd.service... Sep 9 00:25:51.714249 ignition[651]: parsing config with SHA512: 7a4f5f36caff22e38312b7200691997fa31e42ddad7ac7bd8300bba46638fd2a6302f7bb2c17123c19afb8b684a8d3d8d0d69004a23cd7c8808b5ccd926c129e Sep 9 00:25:51.719982 unknown[651]: fetched base config from "system" Sep 9 00:25:51.720000 unknown[651]: fetched user config from "qemu" Sep 9 00:25:51.720437 ignition[651]: fetch-offline: fetch-offline passed Sep 9 00:25:51.721429 systemd[1]: Finished ignition-fetch-offline.service. Sep 9 00:25:51.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.720512 ignition[651]: Ignition finished successfully Sep 9 00:25:51.729734 systemd-networkd[740]: lo: Link UP Sep 9 00:25:51.729741 systemd-networkd[740]: lo: Gained carrier Sep 9 00:25:51.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.730135 systemd-networkd[740]: Enumeration completed Sep 9 00:25:51.730203 systemd[1]: Started systemd-networkd.service. Sep 9 00:25:51.730319 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:25:51.731137 systemd[1]: Reached target network.target. Sep 9 00:25:51.731246 systemd-networkd[740]: eth0: Link UP Sep 9 00:25:51.731249 systemd-networkd[740]: eth0: Gained carrier Sep 9 00:25:51.732194 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:25:51.732927 systemd[1]: Starting ignition-kargs.service... Sep 9 00:25:51.734386 systemd[1]: Starting iscsiuio.service... Sep 9 00:25:51.741525 systemd[1]: Started iscsiuio.service. Sep 9 00:25:51.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.743030 systemd[1]: Starting iscsid.service... 
Sep 9 00:25:51.743427 ignition[742]: Ignition 2.14.0 Sep 9 00:25:51.743433 ignition[742]: Stage: kargs Sep 9 00:25:51.743523 ignition[742]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:51.743532 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:51.746702 iscsid[751]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:25:51.746702 iscsid[751]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 9 00:25:51.746702 iscsid[751]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 9 00:25:51.746702 iscsid[751]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 9 00:25:51.746702 iscsid[751]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:25:51.746702 iscsid[751]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 9 00:25:51.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.746029 systemd[1]: Finished ignition-kargs.service. Sep 9 00:25:51.744250 ignition[742]: kargs: kargs passed Sep 9 00:25:51.748080 systemd[1]: Starting ignition-disks.service... Sep 9 00:25:51.744290 ignition[742]: Ignition finished successfully Sep 9 00:25:51.749193 systemd[1]: Started iscsid.service. Sep 9 00:25:51.753985 ignition[752]: Ignition 2.14.0 Sep 9 00:25:51.749202 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:25:51.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.753990 ignition[752]: Stage: disks Sep 9 00:25:51.753996 systemd[1]: Starting dracut-initqueue.service... Sep 9 00:25:51.754072 ignition[752]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:51.760141 systemd[1]: Finished ignition-disks.service. Sep 9 00:25:51.754080 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:51.761186 systemd[1]: Reached target initrd-root-device.target. Sep 9 00:25:51.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.758978 ignition[752]: disks: disks passed Sep 9 00:25:51.762637 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:25:51.759026 ignition[752]: Ignition finished successfully Sep 9 00:25:51.763594 systemd[1]: Reached target local-fs.target. Sep 9 00:25:51.764647 systemd[1]: Reached target sysinit.target. Sep 9 00:25:51.765765 systemd[1]: Reached target basic.target. Sep 9 00:25:51.767022 systemd[1]: Finished dracut-initqueue.service. 
Sep 9 00:25:51.768185 systemd[1]: Reached target remote-fs-pre.target. Sep 9 00:25:51.769386 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:25:51.770374 systemd[1]: Reached target remote-fs.target. Sep 9 00:25:51.772113 systemd[1]: Starting dracut-pre-mount.service... Sep 9 00:25:51.779607 systemd[1]: Finished dracut-pre-mount.service. Sep 9 00:25:51.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.781021 systemd[1]: Starting systemd-fsck-root.service... Sep 9 00:25:51.791852 systemd-fsck[773]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 9 00:25:51.795703 systemd[1]: Finished systemd-fsck-root.service. Sep 9 00:25:51.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.798571 systemd[1]: Mounting sysroot.mount... Sep 9 00:25:51.804704 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 9 00:25:51.804975 systemd[1]: Mounted sysroot.mount. Sep 9 00:25:51.805677 systemd[1]: Reached target initrd-root-fs.target. Sep 9 00:25:51.807606 systemd[1]: Mounting sysroot-usr.mount... Sep 9 00:25:51.808510 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 9 00:25:51.808557 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:25:51.808584 systemd[1]: Reached target ignition-diskful.target. Sep 9 00:25:51.810276 systemd[1]: Mounted sysroot-usr.mount. Sep 9 00:25:51.811920 systemd[1]: Starting initrd-setup-root.service... Sep 9 00:25:51.816159 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:25:51.820335 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:25:51.824336 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:25:51.827976 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:25:51.853250 systemd[1]: Finished initrd-setup-root.service. Sep 9 00:25:51.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.854973 systemd[1]: Starting ignition-mount.service... Sep 9 00:25:51.856262 systemd[1]: Starting sysroot-boot.service... Sep 9 00:25:51.860909 bash[824]: umount: /sysroot/usr/share/oem: not mounted. Sep 9 00:25:51.869277 ignition[826]: INFO : Ignition 2.14.0 Sep 9 00:25:51.869277 ignition[826]: INFO : Stage: mount Sep 9 00:25:51.870501 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:51.870501 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:51.870501 ignition[826]: INFO : mount: mount passed Sep 9 00:25:51.870501 ignition[826]: INFO : Ignition finished successfully Sep 9 00:25:51.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:51.870814 systemd[1]: Finished ignition-mount.service. 
Sep 9 00:25:51.874670 systemd[1]: Finished sysroot-boot.service. Sep 9 00:25:51.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:52.420056 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.40 Sep 9 00:25:52.420073 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Sep 9 00:25:52.589434 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 9 00:25:52.597395 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (834) Sep 9 00:25:52.597431 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:25:52.598056 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:25:52.598076 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:25:52.601324 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 9 00:25:52.602813 systemd[1]: Starting ignition-files.service... Sep 9 00:25:52.617230 ignition[854]: INFO : Ignition 2.14.0 Sep 9 00:25:52.617230 ignition[854]: INFO : Stage: files Sep 9 00:25:52.618911 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:52.618911 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:52.618911 ignition[854]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:25:52.622462 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:25:52.622462 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:25:52.622462 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:25:52.622462 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:25:52.622462 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:25:52.622076 unknown[854]: wrote ssh authorized keys file for user: core Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:25:52.629925 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 9 
00:25:52.996098 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 9 00:25:53.374832 systemd-networkd[740]: eth0: Gained IPv6LL Sep 9 00:25:53.491279 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 00:25:53.491279 ignition[854]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Sep 9 00:25:53.491279 ignition[854]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:25:53.498464 ignition[854]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:25:53.498464 ignition[854]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 9 00:25:53.498464 ignition[854]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:25:53.498464 ignition[854]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:25:53.535490 ignition[854]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:25:53.537685 ignition[854]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:25:53.537685 ignition[854]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:25:53.537685 ignition[854]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:25:53.537685 ignition[854]: INFO : files: files passed Sep 9 00:25:53.537685 ignition[854]: INFO : Ignition finished successfully Sep 9 00:25:53.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.537817 systemd[1]: Finished ignition-files.service. Sep 9 00:25:53.540897 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 9 00:25:53.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.542027 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 9 00:25:53.549013 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 9 00:25:53.542710 systemd[1]: Starting ignition-quench.service... Sep 9 00:25:53.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.551571 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:25:53.545743 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 9 00:25:53.545831 systemd[1]: Finished ignition-quench.service. Sep 9 00:25:53.549060 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 9 00:25:53.551136 systemd[1]: Reached target ignition-complete.target. Sep 9 00:25:53.553124 systemd[1]: Starting initrd-parse-etc.service... Sep 9 00:25:53.566978 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:25:53.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.567089 systemd[1]: Finished initrd-parse-etc.service. Sep 9 00:25:53.568048 systemd[1]: Reached target initrd-fs.target. Sep 9 00:25:53.569870 systemd[1]: Reached target initrd.target. Sep 9 00:25:53.571145 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 9 00:25:53.571981 systemd[1]: Starting dracut-pre-pivot.service... Sep 9 00:25:53.582644 systemd[1]: Finished dracut-pre-pivot.service. Sep 9 00:25:53.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.584480 systemd[1]: Starting initrd-cleanup.service... Sep 9 00:25:53.592972 systemd[1]: Stopped target nss-lookup.target. Sep 9 00:25:53.593925 systemd[1]: Stopped target remote-cryptsetup.target. Sep 9 00:25:53.595220 systemd[1]: Stopped target timers.target. Sep 9 00:25:53.596433 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:25:53.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.596562 systemd[1]: Stopped dracut-pre-pivot.service. Sep 9 00:25:53.597642 systemd[1]: Stopped target initrd.target. Sep 9 00:25:53.598850 systemd[1]: Stopped target basic.target. Sep 9 00:25:53.600020 systemd[1]: Stopped target ignition-complete.target. Sep 9 00:25:53.601377 systemd[1]: Stopped target ignition-diskful.target. Sep 9 00:25:53.602507 systemd[1]: Stopped target initrd-root-device.target. Sep 9 00:25:53.603805 systemd[1]: Stopped target remote-fs.target. Sep 9 00:25:53.605179 systemd[1]: Stopped target remote-fs-pre.target. Sep 9 00:25:53.606469 systemd[1]: Stopped target sysinit.target. Sep 9 00:25:53.607560 systemd[1]: Stopped target local-fs.target. Sep 9 00:25:53.608756 systemd[1]: Stopped target local-fs-pre.target. Sep 9 00:25:53.609911 systemd[1]: Stopped target swap.target. Sep 9 00:25:53.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.610951 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:25:53.611067 systemd[1]: Stopped dracut-pre-mount.service. Sep 9 00:25:53.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.612225 systemd[1]: Stopped target cryptsetup.target. 
Sep 9 00:25:53.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.613327 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:25:53.613426 systemd[1]: Stopped dracut-initqueue.service. Sep 9 00:25:53.614781 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:25:53.614879 systemd[1]: Stopped ignition-fetch-offline.service. Sep 9 00:25:53.616134 systemd[1]: Stopped target paths.target. Sep 9 00:25:53.617325 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:25:53.621719 systemd[1]: Stopped systemd-ask-password-console.path. Sep 9 00:25:53.623359 systemd[1]: Stopped target slices.target. Sep 9 00:25:53.624134 systemd[1]: Stopped target sockets.target. Sep 9 00:25:53.625292 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:25:53.625367 systemd[1]: Closed iscsid.socket. Sep 9 00:25:53.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.626393 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:25:53.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.626497 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 9 00:25:53.627745 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:25:53.627841 systemd[1]: Stopped ignition-files.service. Sep 9 00:25:53.629763 systemd[1]: Stopping ignition-mount.service... Sep 9 00:25:53.631573 systemd[1]: Stopping iscsiuio.service... Sep 9 00:25:53.633942 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:25:53.634086 systemd[1]: Stopped kmod-static-nodes.service. Sep 9 00:25:53.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.636392 systemd[1]: Stopping sysroot-boot.service... Sep 9 00:25:53.637684 ignition[895]: INFO : Ignition 2.14.0 Sep 9 00:25:53.637684 ignition[895]: INFO : Stage: umount Sep 9 00:25:53.637684 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:53.637684 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:53.637684 ignition[895]: INFO : umount: umount passed Sep 9 00:25:53.637684 ignition[895]: INFO : Ignition finished successfully Sep 9 00:25:53.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:53.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.637139 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:25:53.637273 systemd[1]: Stopped systemd-udev-trigger.service. Sep 9 00:25:53.638678 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:25:53.638824 systemd[1]: Stopped dracut-pre-trigger.service. Sep 9 00:25:53.641559 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 9 00:25:53.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.641676 systemd[1]: Stopped iscsiuio.service. Sep 9 00:25:53.643112 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:25:53.643216 systemd[1]: Stopped ignition-mount.service. Sep 9 00:25:53.644628 systemd[1]: Stopped target network.target. Sep 9 00:25:53.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.645727 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:25:53.645764 systemd[1]: Closed iscsiuio.socket. Sep 9 00:25:53.647836 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:25:53.647976 systemd[1]: Stopped ignition-disks.service. Sep 9 00:25:53.650025 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:25:53.650067 systemd[1]: Stopped ignition-kargs.service. Sep 9 00:25:53.651302 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:25:53.651339 systemd[1]: Stopped ignition-setup.service. Sep 9 00:25:53.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.652571 systemd[1]: Stopping systemd-networkd.service... Sep 9 00:25:53.653890 systemd[1]: Stopping systemd-resolved.service... Sep 9 00:25:53.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.655739 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:25:53.656279 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:25:53.668000 audit: BPF prog-id=6 op=UNLOAD Sep 9 00:25:53.656368 systemd[1]: Finished initrd-cleanup.service. 
Sep 9 00:25:53.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.663152 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:25:53.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.663254 systemd[1]: Stopped systemd-resolved.service. Sep 9 00:25:53.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.663755 systemd-networkd[740]: eth0: DHCPv6 lease lost Sep 9 00:25:53.665229 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:25:53.665328 systemd[1]: Stopped systemd-networkd.service. Sep 9 00:25:53.666741 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:25:53.666773 systemd[1]: Closed systemd-networkd.socket. Sep 9 00:25:53.668411 systemd[1]: Stopping network-cleanup.service... Sep 9 00:25:53.683000 audit: BPF prog-id=9 op=UNLOAD Sep 9 00:25:53.669282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:25:53.669344 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 9 00:25:53.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.673392 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:25:53.673442 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:25:53.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.675309 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:25:53.675354 systemd[1]: Stopped systemd-modules-load.service. Sep 9 00:25:53.676148 systemd[1]: Stopping systemd-udevd.service... Sep 9 00:25:53.681752 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:25:53.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.684200 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:25:53.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.684313 systemd[1]: Stopped network-cleanup.service. Sep 9 00:25:53.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.687782 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:25:53.687915 systemd[1]: Stopped systemd-udevd.service. Sep 9 00:25:53.688864 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 9 00:25:53.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.688903 systemd[1]: Closed systemd-udevd-control.socket. Sep 9 00:25:53.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.690389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:25:53.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.690419 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 9 00:25:53.691673 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:25:53.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:53.691732 systemd[1]: Stopped dracut-pre-udev.service. Sep 9 00:25:53.692809 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:25:53.692844 systemd[1]: Stopped dracut-cmdline.service. Sep 9 00:25:53.694911 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:25:53.694951 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 9 00:25:53.697003 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 9 00:25:53.698078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:25:53.698126 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 9 00:25:53.699546 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:25:53.699655 systemd[1]: Stopped sysroot-boot.service. Sep 9 00:25:53.700565 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:25:53.700603 systemd[1]: Stopped initrd-setup-root.service. Sep 9 00:25:53.702473 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:25:53.702571 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 9 00:25:53.703724 systemd[1]: Reached target initrd-switch-root.target. Sep 9 00:25:53.705707 systemd[1]: Starting initrd-switch-root.service... Sep 9 00:25:53.712755 systemd[1]: Switching root. Sep 9 00:25:53.734369 iscsid[751]: iscsid shutting down. Sep 9 00:25:53.735079 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Sep 9 00:25:53.735127 systemd-journald[290]: Journal stopped Sep 9 00:25:55.836199 kernel: SELinux: Class mctp_socket not defined in policy. Sep 9 00:25:55.836257 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 9 00:25:55.836269 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 9 00:25:55.836280 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:25:55.836368 kernel: SELinux: policy capability open_perms=1 Sep 9 00:25:55.836383 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:25:55.836393 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:25:55.836403 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:25:55.836415 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:25:55.836429 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:25:55.836438 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:25:55.836450 systemd[1]: Successfully loaded SELinux policy in 33.756ms. Sep 9 00:25:55.836465 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.605ms. Sep 9 00:25:55.836477 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 9 00:25:55.836490 systemd[1]: Detected virtualization kvm. Sep 9 00:25:55.836502 systemd[1]: Detected architecture arm64. Sep 9 00:25:55.836555 systemd[1]: Detected first boot. Sep 9 00:25:55.836571 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:25:55.836582 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 9 00:25:55.836592 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:25:55.836602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:25:55.836614 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:25:55.836625 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:25:55.836636 kernel: kauditd_printk_skb: 79 callbacks suppressed Sep 9 00:25:55.836648 kernel: audit: type=1334 audit(1757377555.702:83): prog-id=12 op=LOAD Sep 9 00:25:55.836817 kernel: audit: type=1334 audit(1757377555.702:84): prog-id=3 op=UNLOAD Sep 9 00:25:55.836833 kernel: audit: type=1334 audit(1757377555.702:85): prog-id=13 op=LOAD Sep 9 00:25:55.836850 kernel: audit: type=1334 audit(1757377555.703:86): prog-id=14 op=LOAD Sep 9 00:25:55.837237 kernel: audit: type=1334 audit(1757377555.703:87): prog-id=4 op=UNLOAD Sep 9 00:25:55.837271 kernel: audit: type=1334 audit(1757377555.703:88): prog-id=5 op=UNLOAD Sep 9 00:25:55.837284 systemd[1]: iscsid.service: Deactivated successfully. Sep 9 00:25:55.837296 kernel: audit: type=1334 audit(1757377555.704:89): prog-id=15 op=LOAD Sep 9 00:25:55.837311 systemd[1]: Stopped iscsid.service. Sep 9 00:25:55.837321 kernel: audit: type=1334 audit(1757377555.704:90): prog-id=12 op=UNLOAD Sep 9 00:25:55.837332 kernel: audit: type=1334 audit(1757377555.705:91): prog-id=16 op=LOAD Sep 9 00:25:55.837708 kernel: audit: type=1334 audit(1757377555.705:92): prog-id=17 op=LOAD Sep 9 00:25:55.837730 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Sep 9 00:25:55.837742 systemd[1]: Stopped initrd-switch-root.service. Sep 9 00:25:55.837753 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:25:55.837796 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 9 00:25:55.837812 systemd[1]: Created slice system-addon\x2drun.slice. Sep 9 00:25:55.837826 systemd[1]: Created slice system-getty.slice. Sep 9 00:25:55.838080 systemd[1]: Created slice system-modprobe.slice. Sep 9 00:25:55.838189 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 9 00:25:55.838202 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 9 00:25:55.838213 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 9 00:25:55.838368 systemd[1]: Created slice user.slice. Sep 9 00:25:55.838385 systemd[1]: Started systemd-ask-password-console.path. Sep 9 00:25:55.838398 systemd[1]: Started systemd-ask-password-wall.path. Sep 9 00:25:55.838410 systemd[1]: Set up automount boot.automount. Sep 9 00:25:55.838426 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 9 00:25:55.838438 systemd[1]: Stopped target initrd-switch-root.target. Sep 9 00:25:55.838450 systemd[1]: Stopped target initrd-fs.target. Sep 9 00:25:55.838461 systemd[1]: Stopped target initrd-root-fs.target. Sep 9 00:25:55.838474 systemd[1]: Reached target integritysetup.target. Sep 9 00:25:55.838487 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:25:55.838500 systemd[1]: Reached target remote-fs.target. Sep 9 00:25:55.838518 systemd[1]: Reached target slices.target. Sep 9 00:25:55.838530 systemd[1]: Reached target swap.target. Sep 9 00:25:55.838542 systemd[1]: Reached target torcx.target. Sep 9 00:25:55.838553 systemd[1]: Reached target veritysetup.target. Sep 9 00:25:55.838563 systemd[1]: Listening on systemd-coredump.socket. Sep 9 00:25:55.838579 systemd[1]: Listening on systemd-initctl.socket. Sep 9 00:25:55.838590 systemd[1]: Listening on systemd-networkd.socket. Sep 9 00:25:55.838601 systemd[1]: Listening on systemd-udevd-control.socket. Sep 9 00:25:55.838614 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 9 00:25:55.838624 systemd[1]: Listening on systemd-userdbd.socket. Sep 9 00:25:55.838635 systemd[1]: Mounting dev-hugepages.mount... Sep 9 00:25:55.838646 systemd[1]: Mounting dev-mqueue.mount... Sep 9 00:25:55.838657 systemd[1]: Mounting media.mount... Sep 9 00:25:55.838668 systemd[1]: Mounting sys-kernel-debug.mount... Sep 9 00:25:55.838680 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 9 00:25:55.838701 systemd[1]: Mounting tmp.mount... Sep 9 00:25:55.838713 systemd[1]: Starting flatcar-tmpfiles.service... Sep 9 00:25:55.838726 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:55.838738 systemd[1]: Starting kmod-static-nodes.service... Sep 9 00:25:55.838749 systemd[1]: Starting modprobe@configfs.service... Sep 9 00:25:55.838761 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:55.838883 systemd[1]: Starting modprobe@drm.service... Sep 9 00:25:55.838909 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:55.838921 systemd[1]: Starting modprobe@fuse.service... Sep 9 00:25:55.838931 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:55.838942 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:25:55.838956 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Sep 9 00:25:55.838967 systemd[1]: Stopped systemd-fsck-root.service. Sep 9 00:25:55.838977 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:25:55.838988 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:25:55.839096 systemd[1]: Stopped systemd-journald.service. Sep 9 00:25:55.839113 systemd[1]: Starting systemd-journald.service... Sep 9 00:25:55.839124 kernel: fuse: init (API version 7.34) Sep 9 00:25:55.839135 systemd[1]: Starting systemd-modules-load.service... Sep 9 00:25:55.839148 systemd[1]: Starting systemd-network-generator.service... Sep 9 00:25:55.839167 kernel: loop: module loaded Sep 9 00:25:55.839179 systemd[1]: Starting systemd-remount-fs.service... Sep 9 00:25:55.839217 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:25:55.839230 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:25:55.839242 systemd[1]: Stopped verity-setup.service. Sep 9 00:25:55.839254 systemd[1]: Mounted dev-hugepages.mount. Sep 9 00:25:55.839265 systemd[1]: Mounted dev-mqueue.mount. Sep 9 00:25:55.839276 systemd[1]: Mounted media.mount. Sep 9 00:25:55.839286 systemd[1]: Mounted sys-kernel-debug.mount. Sep 9 00:25:55.839297 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 9 00:25:55.839309 systemd[1]: Mounted tmp.mount. Sep 9 00:25:55.839319 systemd[1]: Finished kmod-static-nodes.service. Sep 9 00:25:55.839330 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:25:55.839341 systemd[1]: Finished modprobe@configfs.service. Sep 9 00:25:55.839353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:55.839363 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:55.839376 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:25:55.839387 systemd[1]: Finished modprobe@drm.service. Sep 9 00:25:55.839397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:55.839413 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:55.839424 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:25:55.839434 systemd[1]: Finished modprobe@fuse.service. Sep 9 00:25:55.839444 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:55.839459 systemd-journald[994]: Journal started Sep 9 00:25:55.839592 systemd-journald[994]: Runtime Journal (/run/log/journal/de7c848c27a341d2b9784ec4643dc5b0) is 6.0M, max 48.7M, 42.6M free. 
Sep 9 00:25:53.790000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:25:53.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 9 00:25:53.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 9 00:25:53.899000 audit: BPF prog-id=10 op=LOAD Sep 9 00:25:53.899000 audit: BPF prog-id=10 op=UNLOAD Sep 9 00:25:53.899000 audit: BPF prog-id=11 op=LOAD Sep 9 00:25:53.899000 audit: BPF prog-id=11 op=UNLOAD Sep 9 00:25:53.957000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 9 00:25:53.957000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400024d89c a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:25:53.957000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 9 00:25:53.959000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 9 00:25:53.959000 audit[928]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400024d975 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:25:53.959000 audit: CWD cwd="/" Sep 9 00:25:53.959000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:25:53.959000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:25:53.959000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 9 00:25:55.702000 audit: BPF prog-id=12 op=LOAD Sep 9 00:25:55.702000 audit: BPF prog-id=3 op=UNLOAD Sep 9 00:25:55.702000 audit: BPF prog-id=13 op=LOAD Sep 9 00:25:55.703000 audit: BPF prog-id=14 op=LOAD Sep 9 00:25:55.703000 audit: BPF prog-id=4 op=UNLOAD Sep 9 00:25:55.703000 audit: BPF prog-id=5 op=UNLOAD Sep 9 00:25:55.704000 audit: BPF prog-id=15 op=LOAD Sep 9 00:25:55.704000 audit: BPF prog-id=12 op=UNLOAD Sep 9 00:25:55.705000 audit: BPF prog-id=16 
op=LOAD Sep 9 00:25:55.705000 audit: BPF prog-id=17 op=LOAD Sep 9 00:25:55.705000 audit: BPF prog-id=13 op=UNLOAD Sep 9 00:25:55.705000 audit: BPF prog-id=14 op=UNLOAD Sep 9 00:25:55.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.720000 audit: BPF prog-id=15 op=UNLOAD Sep 9 00:25:55.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.840738 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:55.801000 audit: BPF prog-id=18 op=LOAD Sep 9 00:25:55.802000 audit: BPF prog-id=19 op=LOAD Sep 9 00:25:55.802000 audit: BPF prog-id=20 op=LOAD Sep 9 00:25:55.802000 audit: BPF prog-id=16 op=UNLOAD Sep 9 00:25:55.802000 audit: BPF prog-id=17 op=UNLOAD Sep 9 00:25:55.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:55.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.842188 systemd[1]: Started systemd-journald.service. Sep 9 00:25:55.834000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 9 00:25:55.834000 audit[994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd6f68f10 a2=4000 a3=1 items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:25:55.834000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 9 00:25:55.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.700841 systemd[1]: Queued start job for default target multi-user.target. 
Sep 9 00:25:53.956579 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:25:55.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.700854 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 9 00:25:53.956910 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 9 00:25:55.707083 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:25:53.956938 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 9 00:25:55.842970 systemd[1]: Finished systemd-modules-load.service. Sep 9 00:25:53.956971 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 9 00:25:53.956981 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 9 00:25:53.957011 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 9 00:25:53.957023 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 9 00:25:53.957226 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 9 00:25:53.957260 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 9 00:25:53.957272 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 9 00:25:55.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:53.958107 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 9 00:25:53.958140 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 9 00:25:53.958159 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 9 00:25:55.844240 systemd[1]: Finished systemd-network-generator.service. Sep 9 00:25:53.958173 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 9 00:25:53.958190 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 9 00:25:53.958204 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 9 00:25:55.433768 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:55.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:55.434030 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:55.434134 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:55.434302 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:55.434353 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 9 00:25:55.434409 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 9 00:25:55.845407 systemd[1]: Finished systemd-remount-fs.service. Sep 9 00:25:55.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.846588 systemd[1]: Reached target network-pre.target. Sep 9 00:25:55.848729 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 9 00:25:55.850397 systemd[1]: Mounting sys-kernel-config.mount... Sep 9 00:25:55.851052 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:25:55.855622 systemd[1]: Starting systemd-hwdb-update.service... Sep 9 00:25:55.857877 systemd[1]: Starting systemd-journal-flush.service... Sep 9 00:25:55.858747 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:25:55.859752 systemd[1]: Starting systemd-random-seed.service... Sep 9 00:25:55.860450 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:25:55.862928 systemd-journald[994]: Time spent on flushing to /var/log/journal/de7c848c27a341d2b9784ec4643dc5b0 is 22.230ms for 975 entries. Sep 9 00:25:55.862928 systemd-journald[994]: System Journal (/var/log/journal/de7c848c27a341d2b9784ec4643dc5b0) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:25:55.898383 systemd-journald[994]: Received client request to flush runtime journal. Sep 9 00:25:55.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:55.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.862990 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:25:55.865908 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 9 00:25:55.867083 systemd[1]: Mounted sys-kernel-config.mount. Sep 9 00:25:55.879844 systemd[1]: Finished systemd-random-seed.service. Sep 9 00:25:55.880884 systemd[1]: Finished flatcar-tmpfiles.service. Sep 9 00:25:55.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.881763 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:25:55.882445 systemd[1]: Reached target first-boot-complete.target. Sep 9 00:25:55.884285 systemd[1]: Starting systemd-sysusers.service... Sep 9 00:25:55.899783 systemd[1]: Finished systemd-journal-flush.service. Sep 9 00:25:55.904463 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:25:55.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.906591 systemd[1]: Starting systemd-udev-settle.service... Sep 9 00:25:55.907520 systemd[1]: Finished systemd-sysusers.service. Sep 9 00:25:55.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:55.913328 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 9 00:25:56.254194 systemd[1]: Finished systemd-hwdb-update.service. Sep 9 00:25:56.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.254000 audit: BPF prog-id=21 op=LOAD Sep 9 00:25:56.254000 audit: BPF prog-id=22 op=LOAD Sep 9 00:25:56.254000 audit: BPF prog-id=7 op=UNLOAD Sep 9 00:25:56.254000 audit: BPF prog-id=8 op=UNLOAD Sep 9 00:25:56.256175 systemd[1]: Starting systemd-udevd.service... Sep 9 00:25:56.273675 systemd-udevd[1032]: Using default interface naming scheme 'v252'. Sep 9 00:25:56.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.289000 audit: BPF prog-id=23 op=LOAD Sep 9 00:25:56.288366 systemd[1]: Started systemd-udevd.service. Sep 9 00:25:56.290641 systemd[1]: Starting systemd-networkd.service... 
Sep 9 00:25:56.300000 audit: BPF prog-id=24 op=LOAD Sep 9 00:25:56.300000 audit: BPF prog-id=25 op=LOAD Sep 9 00:25:56.300000 audit: BPF prog-id=26 op=LOAD Sep 9 00:25:56.301899 systemd[1]: Starting systemd-userdbd.service... Sep 9 00:25:56.317083 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 9 00:25:56.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.331548 systemd[1]: Started systemd-userdbd.service. Sep 9 00:25:56.383555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:25:56.388993 systemd-networkd[1039]: lo: Link UP Sep 9 00:25:56.389003 systemd-networkd[1039]: lo: Gained carrier Sep 9 00:25:56.389363 systemd-networkd[1039]: Enumeration completed Sep 9 00:25:56.389462 systemd[1]: Started systemd-networkd.service. Sep 9 00:25:56.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.390820 systemd-networkd[1039]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:25:56.392004 systemd-networkd[1039]: eth0: Link UP Sep 9 00:25:56.392015 systemd-networkd[1039]: eth0: Gained carrier Sep 9 00:25:56.406097 systemd[1]: Finished systemd-udev-settle.service. Sep 9 00:25:56.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.408101 systemd[1]: Starting lvm2-activation-early.service... Sep 9 00:25:56.408928 systemd-networkd[1039]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:25:56.417316 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:25:56.454728 systemd[1]: Finished lvm2-activation-early.service. Sep 9 00:25:56.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.455567 systemd[1]: Reached target cryptsetup.target. Sep 9 00:25:56.457463 systemd[1]: Starting lvm2-activation.service... Sep 9 00:25:56.461385 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:25:56.495010 systemd[1]: Finished lvm2-activation.service. Sep 9 00:25:56.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.496674 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:25:56.497366 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:25:56.497392 systemd[1]: Reached target local-fs.target. Sep 9 00:25:56.498165 systemd[1]: Reached target machines.target. Sep 9 00:25:56.501928 systemd[1]: Starting ldconfig.service... Sep 9 00:25:56.504049 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 9 00:25:56.504107 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:56.505413 systemd[1]: Starting systemd-boot-update.service... Sep 9 00:25:56.508364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 9 00:25:56.511875 systemd[1]: Starting systemd-machine-id-commit.service... Sep 9 00:25:56.514110 systemd[1]: Starting systemd-sysext.service... Sep 9 00:25:56.515197 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1068 (bootctl) Sep 9 00:25:56.516415 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 9 00:25:56.535677 systemd[1]: Unmounting usr-share-oem.mount... Sep 9 00:25:56.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.536895 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 9 00:25:56.546133 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 9 00:25:56.546334 systemd[1]: Unmounted usr-share-oem.mount. Sep 9 00:25:56.604719 kernel: loop0: detected capacity change from 0 to 211168 Sep 9 00:25:56.612923 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:25:56.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.615413 systemd[1]: Finished systemd-machine-id-commit.service. Sep 9 00:25:56.621837 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:25:56.624953 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) Sep 9 00:25:56.624953 systemd-fsck[1079]: /dev/vda1: 236 files, 117310/258078 clusters Sep 9 00:25:56.629805 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 9 00:25:56.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.632767 systemd[1]: Mounting boot.mount... Sep 9 00:25:56.640729 kernel: loop1: detected capacity change from 0 to 211168 Sep 9 00:25:56.642068 systemd[1]: Mounted boot.mount. Sep 9 00:25:56.652303 systemd[1]: Finished systemd-boot-update.service. Sep 9 00:25:56.652391 (sd-sysext)[1084]: Using extensions 'kubernetes'. Sep 9 00:25:56.652789 (sd-sysext)[1084]: Merged extensions into '/usr'. Sep 9 00:25:56.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.672511 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:56.674217 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:56.675998 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:56.677686 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:56.678364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 9 00:25:56.678517 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:56.679608 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:56.679760 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:56.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.681752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:56.682041 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:56.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.684237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:56.684356 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:56.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.685731 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:25:56.685858 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:25:56.814593 ldconfig[1067]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:25:56.820102 systemd[1]: Mounting usr-share-oem.mount... Sep 9 00:25:56.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.821085 systemd[1]: Finished ldconfig.service. Sep 9 00:25:56.825618 systemd[1]: Mounted usr-share-oem.mount. Sep 9 00:25:56.827570 systemd[1]: Finished systemd-sysext.service. Sep 9 00:25:56.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:56.829559 systemd[1]: Starting ensure-sysext.service... Sep 9 00:25:56.831242 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 9 00:25:56.836025 systemd[1]: Reloading. Sep 9 00:25:56.855428 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Sep 9 00:25:56.864625 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:25:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:25:56.864654 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:25:56Z" level=info msg="torcx already run" Sep 9 00:25:56.871708 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:25:56.878950 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:25:56.932547 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:25:56.932587 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:25:56.948997 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:25:56.991000 audit: BPF prog-id=27 op=LOAD Sep 9 00:25:56.991000 audit: BPF prog-id=24 op=UNLOAD Sep 9 00:25:56.991000 audit: BPF prog-id=28 op=LOAD Sep 9 00:25:56.991000 audit: BPF prog-id=29 op=LOAD Sep 9 00:25:56.991000 audit: BPF prog-id=25 op=UNLOAD Sep 9 00:25:56.991000 audit: BPF prog-id=26 op=UNLOAD Sep 9 00:25:56.991000 audit: BPF prog-id=30 op=LOAD Sep 9 00:25:56.992000 audit: BPF prog-id=31 op=LOAD Sep 9 00:25:56.992000 audit: BPF prog-id=21 op=UNLOAD Sep 9 00:25:56.992000 audit: BPF prog-id=22 op=UNLOAD Sep 9 00:25:56.992000 audit: BPF prog-id=32 op=LOAD Sep 9 00:25:56.992000 audit: BPF prog-id=18 op=UNLOAD Sep 9 00:25:56.992000 audit: BPF prog-id=33 op=LOAD Sep 9 00:25:56.992000 audit: BPF prog-id=34 op=LOAD Sep 9 00:25:56.992000 audit: BPF prog-id=19 op=UNLOAD Sep 9 00:25:56.992000 audit: BPF prog-id=20 op=UNLOAD Sep 9 00:25:56.994000 audit: BPF prog-id=35 op=LOAD Sep 9 00:25:56.994000 audit: BPF prog-id=23 op=UNLOAD Sep 9 00:25:57.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.000765 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 9 00:25:57.003021 systemd[1]: Starting audit-rules.service... Sep 9 00:25:57.004777 systemd[1]: Starting clean-ca-certificates.service... Sep 9 00:25:57.006703 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 9 00:25:57.007000 audit: BPF prog-id=36 op=LOAD Sep 9 00:25:57.009000 audit: BPF prog-id=37 op=LOAD Sep 9 00:25:57.009304 systemd[1]: Starting systemd-resolved.service... Sep 9 00:25:57.011348 systemd[1]: Starting systemd-timesyncd.service... Sep 9 00:25:57.013826 systemd[1]: Starting systemd-update-utmp.service... Sep 9 00:25:57.015220 systemd[1]: Finished clean-ca-certificates.service. Sep 9 00:25:57.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:57.018270 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:25:57.018000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.022126 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.023384 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:57.025984 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:57.027960 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:57.028739 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.028919 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:57.029061 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:25:57.030218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:57.030352 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:57.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.032011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:57.032134 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:57.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.033448 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:57.033568 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:57.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.036205 systemd[1]: Finished systemd-update-utmp.service. Sep 9 00:25:57.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:57.037732 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 9 00:25:57.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.040005 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.041510 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:57.043713 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:57.045636 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:57.046487 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.046635 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:57.048042 systemd[1]: Starting systemd-update-done.service... Sep 9 00:25:57.048980 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:25:57.050104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:57.050252 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:57.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.051533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:57.051659 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:57.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.053060 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:57.053171 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:57.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:57.057162 systemd[1]: Finished systemd-update-done.service. Sep 9 00:25:57.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:57.059327 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.061711 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:57.064147 systemd[1]: Starting modprobe@drm.service... Sep 9 00:25:57.066162 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:57.068353 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:57.069440 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.069564 systemd-resolved[1156]: Positive Trust Anchors: Sep 9 00:25:57.069572 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:25:57.069598 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 9 00:25:57.069607 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:57.070155 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:25:57.070217 systemd-timesyncd[1158]: Initial clock synchronization to Tue 2025-09-09 00:25:56.881002 UTC. Sep 9 00:25:57.070945 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 9 00:25:57.071964 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:25:57.072000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 9 00:25:57.072000 audit[1178]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe540e990 a2=420 a3=0 items=0 ppid=1150 pid=1178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:25:57.072000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 9 00:25:57.072877 systemd[1]: Started systemd-timesyncd.service. Sep 9 00:25:57.073037 augenrules[1178]: No rules Sep 9 00:25:57.074639 systemd[1]: Finished audit-rules.service. Sep 9 00:25:57.076069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:57.076216 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:57.077792 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:25:57.077953 systemd[1]: Finished modprobe@drm.service. Sep 9 00:25:57.079272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:57.079400 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:57.080475 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:57.080613 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:57.081993 systemd[1]: Reached target time-set.target. Sep 9 00:25:57.082610 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
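Alongside the audit rule load above (augenrules reports "No rules", i.e. an empty rule set), systemd-resolved logs its DNSSEC trust configuration: the root zone's positive trust anchor plus a list of negative trust anchors, zones where DNSSEC validation is deliberately skipped (private reverse zones and special-use names). A small sketch of the suffix matching that list implies, assuming the anchors exactly as logged and abbreviating the 16-31.172.in-addr.arpa block:

    # Sketch: suffix-match a name against (a subset of) the negative trust
    # anchors logged by systemd-resolved above; validation is skipped there.
    NEGATIVE_ANCHORS = [
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
        "corp", "home", "internal", "intranet", "lan", "local", "private", "test",
    ]  # the 16.172 ... 31.172.in-addr.arpa entries are elided for brevity

    def under_negative_anchor(name: str) -> bool:
        labels = name.rstrip(".").lower().split(".")
        for anchor in NEGATIVE_ANCHORS:
            alabels = anchor.split(".")
            if labels[-len(alabels):] == alabels:
                return True
        return False

    print(under_negative_anchor("printer.lan"))            # True
    print(under_negative_anchor("example.com"))            # False
    print(under_negative_anchor("4.3.2.10.in-addr.arpa"))  # True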
Sep 9 00:25:57.082651 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.083200 systemd[1]: Finished ensure-sysext.service. Sep 9 00:25:57.083296 systemd-resolved[1156]: Defaulting to hostname 'linux'. Sep 9 00:25:57.084837 systemd[1]: Started systemd-resolved.service. Sep 9 00:25:57.085527 systemd[1]: Reached target network.target. Sep 9 00:25:57.086347 systemd[1]: Reached target nss-lookup.target. Sep 9 00:25:57.086992 systemd[1]: Reached target sysinit.target. Sep 9 00:25:57.087625 systemd[1]: Started motdgen.path. Sep 9 00:25:57.088225 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 9 00:25:57.089399 systemd[1]: Started logrotate.timer. Sep 9 00:25:57.090104 systemd[1]: Started mdadm.timer. Sep 9 00:25:57.090621 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 9 00:25:57.091353 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:25:57.091388 systemd[1]: Reached target paths.target. Sep 9 00:25:57.091975 systemd[1]: Reached target timers.target. Sep 9 00:25:57.092988 systemd[1]: Listening on dbus.socket. Sep 9 00:25:57.094741 systemd[1]: Starting docker.socket... Sep 9 00:25:57.097835 systemd[1]: Listening on sshd.socket. Sep 9 00:25:57.098526 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:57.099084 systemd[1]: Listening on docker.socket. Sep 9 00:25:57.099762 systemd[1]: Reached target sockets.target. Sep 9 00:25:57.100474 systemd[1]: Reached target basic.target. Sep 9 00:25:57.101569 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.101600 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 9 00:25:57.102741 systemd[1]: Starting containerd.service... Sep 9 00:25:57.104410 systemd[1]: Starting dbus.service... Sep 9 00:25:57.106113 systemd[1]: Starting enable-oem-cloudinit.service... Sep 9 00:25:57.108242 systemd[1]: Starting extend-filesystems.service... Sep 9 00:25:57.109054 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 9 00:25:57.110302 systemd[1]: Starting motdgen.service... Sep 9 00:25:57.110931 jq[1193]: false Sep 9 00:25:57.112387 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 9 00:25:57.115016 systemd[1]: Starting sshd-keygen.service... Sep 9 00:25:57.118139 systemd[1]: Starting systemd-logind.service... Sep 9 00:25:57.118758 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:57.118834 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:25:57.119358 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:25:57.120106 systemd[1]: Starting update-engine.service... Sep 9 00:25:57.121959 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 9 00:25:57.125290 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 9 00:25:57.125438 jq[1208]: true Sep 9 00:25:57.131124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 9 00:25:57.131499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:25:57.131890 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 9 00:25:57.135330 extend-filesystems[1194]: Found loop1 Sep 9 00:25:57.135400 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:25:57.135580 systemd[1]: Finished motdgen.service. Sep 9 00:25:57.136516 extend-filesystems[1194]: Found vda Sep 9 00:25:57.137938 extend-filesystems[1194]: Found vda1 Sep 9 00:25:57.137938 extend-filesystems[1194]: Found vda2 Sep 9 00:25:57.137938 extend-filesystems[1194]: Found vda3 Sep 9 00:25:57.142532 extend-filesystems[1194]: Found usr Sep 9 00:25:57.142532 extend-filesystems[1194]: Found vda4 Sep 9 00:25:57.142532 extend-filesystems[1194]: Found vda6 Sep 9 00:25:57.142532 extend-filesystems[1194]: Found vda7 Sep 9 00:25:57.142532 extend-filesystems[1194]: Found vda9 Sep 9 00:25:57.142532 extend-filesystems[1194]: Checking size of /dev/vda9 Sep 9 00:25:57.154222 jq[1213]: true Sep 9 00:25:57.146223 systemd[1]: Started dbus.service. Sep 9 00:25:57.145971 dbus-daemon[1192]: [system] SELinux support is enabled Sep 9 00:25:57.152589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:25:57.152613 systemd[1]: Reached target system-config.target. Sep 9 00:25:57.153430 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:25:57.153445 systemd[1]: Reached target user-config.target. Sep 9 00:25:57.167067 extend-filesystems[1194]: Resized partition /dev/vda9 Sep 9 00:25:57.170850 extend-filesystems[1239]: resize2fs 1.46.5 (30-Dec-2021) Sep 9 00:25:57.172679 update_engine[1205]: I0909 00:25:57.170782 1205 main.cc:92] Flatcar Update Engine starting Sep 9 00:25:57.179770 systemd[1]: Started update-engine.service. Sep 9 00:25:57.180038 update_engine[1205]: I0909 00:25:57.179811 1205 update_check_scheduler.cc:74] Next update check in 4m2s Sep 9 00:25:57.183225 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:25:57.182720 systemd[1]: Started locksmithd.service. Sep 9 00:25:57.183129 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:25:57.184836 systemd-logind[1202]: New seat seat0. Sep 9 00:25:57.189979 systemd[1]: Started systemd-logind.service. Sep 9 00:25:57.209872 env[1214]: time="2025-09-09T00:25:57.209802720Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 9 00:25:57.215718 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:25:57.230518 extend-filesystems[1239]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:25:57.230518 extend-filesystems[1239]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:25:57.230518 extend-filesystems[1239]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:25:57.236110 extend-filesystems[1194]: Resized filesystem in /dev/vda9 Sep 9 00:25:57.232489 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.230727800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.230886000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232092560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232122320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232347160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232365680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232378000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232388080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232459680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237273 env[1214]: time="2025-09-09T00:25:57.232766880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:25:57.232669 systemd[1]: Finished extend-filesystems.service. Sep 9 00:25:57.237638 env[1214]: time="2025-09-09T00:25:57.232894080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:25:57.237638 env[1214]: time="2025-09-09T00:25:57.232909440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:25:57.237638 env[1214]: time="2025-09-09T00:25:57.232963880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 9 00:25:57.237638 env[1214]: time="2025-09-09T00:25:57.232975920Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:25:57.238376 bash[1240]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:25:57.239103 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 9 00:25:57.240854 env[1214]: time="2025-09-09T00:25:57.240818640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:25:57.240931 env[1214]: time="2025-09-09T00:25:57.240861960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:25:57.240931 env[1214]: time="2025-09-09T00:25:57.240875520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 9 00:25:57.240931 env[1214]: time="2025-09-09T00:25:57.240912480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.240931 env[1214]: time="2025-09-09T00:25:57.240927600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241025 env[1214]: time="2025-09-09T00:25:57.240949520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241025 env[1214]: time="2025-09-09T00:25:57.240962280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241391 env[1214]: time="2025-09-09T00:25:57.241317480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241391 env[1214]: time="2025-09-09T00:25:57.241340800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241391 env[1214]: time="2025-09-09T00:25:57.241354200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241391 env[1214]: time="2025-09-09T00:25:57.241369840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241391 env[1214]: time="2025-09-09T00:25:57.241382120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:25:57.241537 env[1214]: time="2025-09-09T00:25:57.241524360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:25:57.241636 env[1214]: time="2025-09-09T00:25:57.241609200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:25:57.241882 env[1214]: time="2025-09-09T00:25:57.241854680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:25:57.241926 env[1214]: time="2025-09-09T00:25:57.241885960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.241926 env[1214]: time="2025-09-09T00:25:57.241900400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:25:57.242038 env[1214]: time="2025-09-09T00:25:57.242004960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242038 env[1214]: time="2025-09-09T00:25:57.242021800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242038 env[1214]: time="2025-09-09T00:25:57.242034280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242107 env[1214]: time="2025-09-09T00:25:57.242047320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242107 env[1214]: time="2025-09-09T00:25:57.242059680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242107 env[1214]: time="2025-09-09T00:25:57.242078160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 9 00:25:57.242107 env[1214]: time="2025-09-09T00:25:57.242089280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242107 env[1214]: time="2025-09-09T00:25:57.242100560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242206 env[1214]: time="2025-09-09T00:25:57.242124840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:25:57.242269 env[1214]: time="2025-09-09T00:25:57.242244600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242306 env[1214]: time="2025-09-09T00:25:57.242270400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242306 env[1214]: time="2025-09-09T00:25:57.242283400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:25:57.242306 env[1214]: time="2025-09-09T00:25:57.242294560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:25:57.242362 env[1214]: time="2025-09-09T00:25:57.242308360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 9 00:25:57.242362 env[1214]: time="2025-09-09T00:25:57.242320640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:25:57.242362 env[1214]: time="2025-09-09T00:25:57.242337120Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 9 00:25:57.242421 env[1214]: time="2025-09-09T00:25:57.242372120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:25:57.242647 env[1214]: time="2025-09-09T00:25:57.242584560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:25:57.243332 env[1214]: time="2025-09-09T00:25:57.242652800Z" level=info msg="Connect containerd service" Sep 9 00:25:57.243332 env[1214]: time="2025-09-09T00:25:57.242707840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:25:57.244925 locksmithd[1241]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:25:57.245426 env[1214]: time="2025-09-09T00:25:57.245382560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.245835320Z" level=info msg="Start subscribing containerd event" Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.245884680Z" level=info msg="Start recovering state" Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.245963680Z" level=info msg="Start event monitor" Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.245977040Z" level=info msg="Start snapshots syncer" Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.245991600Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.246001040Z" level=info msg="Start streaming server" Sep 9 
00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.245858720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.246149120Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:25:57.247779 env[1214]: time="2025-09-09T00:25:57.246267200Z" level=info msg="containerd successfully booted in 0.045789s" Sep 9 00:25:57.246363 systemd[1]: Started containerd.service. Sep 9 00:25:58.429949 systemd-networkd[1039]: eth0: Gained IPv6LL Sep 9 00:25:58.432564 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 9 00:25:58.433742 systemd[1]: Reached target network-online.target. Sep 9 00:25:58.435802 systemd[1]: Starting kubelet.service... Sep 9 00:25:58.948511 sshd_keygen[1212]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:25:58.966371 systemd[1]: Finished sshd-keygen.service. Sep 9 00:25:58.968613 systemd[1]: Starting issuegen.service... Sep 9 00:25:58.973551 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:25:58.973713 systemd[1]: Finished issuegen.service. Sep 9 00:25:58.975581 systemd[1]: Starting systemd-user-sessions.service... Sep 9 00:25:58.981955 systemd[1]: Finished systemd-user-sessions.service. Sep 9 00:25:58.984159 systemd[1]: Started getty@tty1.service. Sep 9 00:25:58.986342 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 9 00:25:58.987490 systemd[1]: Reached target getty.target. Sep 9 00:25:59.057824 systemd[1]: Started kubelet.service. Sep 9 00:25:59.059660 systemd[1]: Reached target multi-user.target. Sep 9 00:25:59.063864 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 9 00:25:59.074845 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 9 00:25:59.074996 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 9 00:25:59.076272 systemd[1]: Startup finished in 565ms (kernel) + 4.199s (initrd) + 5.320s (userspace) = 10.085s. Sep 9 00:25:59.514165 kubelet[1271]: E0909 00:25:59.514079 1271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:25:59.516070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:25:59.516208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:26:02.576626 systemd[1]: Created slice system-sshd.slice. Sep 9 00:26:02.577830 systemd[1]: Started sshd@0-10.0.0.40:22-10.0.0.1:49336.service. Sep 9 00:26:02.615718 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 49336 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:02.618034 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:02.628453 systemd-logind[1202]: New session 1 of user core. Sep 9 00:26:02.629343 systemd[1]: Created slice user-500.slice. Sep 9 00:26:02.630420 systemd[1]: Starting user-runtime-dir@500.service... Sep 9 00:26:02.638222 systemd[1]: Finished user-runtime-dir@500.service. Sep 9 00:26:02.640464 systemd[1]: Starting user@500.service... Sep 9 00:26:02.642814 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:02.701609 systemd[1283]: Queued start job for default target default.target. 
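The kubelet failure above is the expected first-boot state: the unit starts before /var/lib/kubelet/config.yaml exists and exits with status 1, and it stays down until a bootstrapper (typically kubeadm init/join, which writes that file) provisions the node and the service is started again. A tiny preflight sketch built only from the path in the error message; nothing like it ships with Flatcar or kubelet, it is purely illustrative:

    #!/usr/bin/env python3
    # Sketch: fail fast with a readable message when the kubelet config file
    # named in the error above is missing, instead of letting the unit crash.
    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the kubelet error

    if not os.path.isfile(CONFIG):
        sys.exit(f"{CONFIG} is missing; run the cluster bootstrapper "
                 "(e.g. kubeadm init/join) before starting kubelet")
    print(f"{CONFIG} found ({os.path.getsize(CONFIG)} bytes); kubelet can start")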
Sep 9 00:26:02.702136 systemd[1283]: Reached target paths.target. Sep 9 00:26:02.702167 systemd[1283]: Reached target sockets.target. Sep 9 00:26:02.702177 systemd[1283]: Reached target timers.target. Sep 9 00:26:02.702187 systemd[1283]: Reached target basic.target. Sep 9 00:26:02.702226 systemd[1283]: Reached target default.target. Sep 9 00:26:02.702249 systemd[1283]: Startup finished in 54ms. Sep 9 00:26:02.702410 systemd[1]: Started user@500.service. Sep 9 00:26:02.703619 systemd[1]: Started session-1.scope. Sep 9 00:26:02.754890 systemd[1]: Started sshd@1-10.0.0.40:22-10.0.0.1:49348.service. Sep 9 00:26:02.786317 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 49348 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:02.788486 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:02.796097 systemd[1]: Started session-2.scope. Sep 9 00:26:02.796683 systemd-logind[1202]: New session 2 of user core. Sep 9 00:26:02.851600 sshd[1292]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:02.854973 systemd[1]: Started sshd@2-10.0.0.40:22-10.0.0.1:49350.service. Sep 9 00:26:02.856091 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:26:02.856120 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:26:02.856916 systemd[1]: sshd@1-10.0.0.40:22-10.0.0.1:49348.service: Deactivated successfully. Sep 9 00:26:02.857788 systemd-logind[1202]: Removed session 2. Sep 9 00:26:02.885642 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:02.886733 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:02.889812 systemd-logind[1202]: New session 3 of user core. Sep 9 00:26:02.890561 systemd[1]: Started session-3.scope. Sep 9 00:26:02.942657 sshd[1297]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:02.946038 systemd[1]: Started sshd@3-10.0.0.40:22-10.0.0.1:49366.service. Sep 9 00:26:02.946515 systemd[1]: sshd@2-10.0.0.40:22-10.0.0.1:49350.service: Deactivated successfully. Sep 9 00:26:02.947078 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:26:02.947556 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:26:02.948430 systemd-logind[1202]: Removed session 3. Sep 9 00:26:02.977007 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 49366 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:02.978502 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:02.984238 systemd-logind[1202]: New session 4 of user core. Sep 9 00:26:02.984998 systemd[1]: Started session-4.scope. Sep 9 00:26:03.039399 sshd[1303]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:03.043205 systemd[1]: sshd@3-10.0.0.40:22-10.0.0.1:49366.service: Deactivated successfully. Sep 9 00:26:03.043744 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:26:03.044205 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:26:03.045195 systemd[1]: Started sshd@4-10.0.0.40:22-10.0.0.1:49372.service. Sep 9 00:26:03.045869 systemd-logind[1202]: Removed session 4. 
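The sshd entries above show sessions 2 through 4 from 10.0.0.1 being opened and closed within about a second each, which looks like scripted access rather than interactive logins. A rough sketch for measuring such session lifetimes from journal text like this, assuming the log is fed on stdin with one entry per line:

    #!/usr/bin/env python3
    # Sketch: pair sshd "session opened"/"session closed" entries by PID and
    # print how long each SSH session lasted; reads journal text from stdin.
    import re
    import sys
    from datetime import datetime

    ENTRY = re.compile(r"^(\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2}\.\d+)\s+sshd\[(\d+)\]: "
                       r"pam_unix\(sshd:session\): session (opened|closed)")
    opened = {}

    for line in sys.stdin:
        m = ENTRY.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
        pid, event = m.group(2), m.group(3)
        if event == "opened":
            opened[pid] = ts
        elif pid in opened:
            secs = (ts - opened.pop(pid)).total_seconds()
            print(f"sshd[{pid}] session lasted {secs:.1f}s")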
Sep 9 00:26:03.075990 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 49372 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:03.077170 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:03.080482 systemd-logind[1202]: New session 5 of user core. Sep 9 00:26:03.081281 systemd[1]: Started session-5.scope. Sep 9 00:26:03.137115 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:26:03.137325 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:26:03.149462 systemd[1]: Starting coreos-metadata.service... Sep 9 00:26:03.155487 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:26:03.155751 systemd[1]: Finished coreos-metadata.service. Sep 9 00:26:03.605779 systemd[1]: Stopped kubelet.service. Sep 9 00:26:03.607768 systemd[1]: Starting kubelet.service... Sep 9 00:26:03.629849 systemd[1]: Reloading. Sep 9 00:26:03.680490 /usr/lib/systemd/system-generators/torcx-generator[1374]: time="2025-09-09T00:26:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:26:03.680520 /usr/lib/systemd/system-generators/torcx-generator[1374]: time="2025-09-09T00:26:03Z" level=info msg="torcx already run" Sep 9 00:26:03.770085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:26:03.770111 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:26:03.785227 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:26:03.849173 systemd[1]: Started kubelet.service. Sep 9 00:26:03.850529 systemd[1]: Stopping kubelet.service... Sep 9 00:26:03.850937 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:26:03.851095 systemd[1]: Stopped kubelet.service. Sep 9 00:26:03.852573 systemd[1]: Starting kubelet.service... Sep 9 00:26:03.950336 systemd[1]: Started kubelet.service. Sep 9 00:26:03.989676 kubelet[1418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:26:03.989676 kubelet[1418]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:26:03.989676 kubelet[1418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:26:03.990021 kubelet[1418]: I0909 00:26:03.989740 1418 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:26:06.165351 kubelet[1418]: I0909 00:26:06.165311 1418 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:26:06.165705 kubelet[1418]: I0909 00:26:06.165676 1418 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:26:06.166028 kubelet[1418]: I0909 00:26:06.166007 1418 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:26:06.194950 kubelet[1418]: I0909 00:26:06.194914 1418 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:26:06.204650 kubelet[1418]: E0909 00:26:06.204602 1418 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:26:06.204650 kubelet[1418]: I0909 00:26:06.204645 1418 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:26:06.209535 kubelet[1418]: I0909 00:26:06.209506 1418 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:26:06.211414 kubelet[1418]: I0909 00:26:06.211374 1418 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:26:06.211705 kubelet[1418]: I0909 00:26:06.211529 1418 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:26:06.211893 kubelet[1418]: I0909 00:26:06.211880 1418 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:26:06.211957 kubelet[1418]: I0909 00:26:06.211947 1418 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:26:06.212200 kubelet[1418]: I0909 
00:26:06.212186 1418 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:26:06.215043 kubelet[1418]: I0909 00:26:06.215020 1418 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:26:06.215138 kubelet[1418]: I0909 00:26:06.215127 1418 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:26:06.215207 kubelet[1418]: I0909 00:26:06.215198 1418 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:26:06.215261 kubelet[1418]: I0909 00:26:06.215252 1418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:26:06.216572 kubelet[1418]: E0909 00:26:06.216554 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:06.216677 kubelet[1418]: E0909 00:26:06.216663 1418 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:06.216843 kubelet[1418]: I0909 00:26:06.216828 1418 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:26:06.217581 kubelet[1418]: I0909 00:26:06.217560 1418 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:26:06.217782 kubelet[1418]: W0909 00:26:06.217770 1418 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:26:06.225752 kubelet[1418]: I0909 00:26:06.225722 1418 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:26:06.225835 kubelet[1418]: I0909 00:26:06.225794 1418 server.go:1289] "Started kubelet" Sep 9 00:26:06.226456 kubelet[1418]: I0909 00:26:06.226415 1418 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:26:06.227291 kubelet[1418]: I0909 00:26:06.227262 1418 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:26:06.228423 kubelet[1418]: I0909 00:26:06.228319 1418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:26:06.229148 kubelet[1418]: I0909 00:26:06.229112 1418 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:26:06.230356 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 9 00:26:06.230616 kubelet[1418]: I0909 00:26:06.230599 1418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:26:06.232208 kubelet[1418]: E0909 00:26:06.232124 1418 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:26:06.232845 kubelet[1418]: I0909 00:26:06.232822 1418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:26:06.233128 kubelet[1418]: E0909 00:26:06.233083 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:06.233128 kubelet[1418]: I0909 00:26:06.233112 1418 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:26:06.233350 kubelet[1418]: I0909 00:26:06.233329 1418 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:26:06.233585 kubelet[1418]: I0909 00:26:06.233447 1418 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:26:06.233879 kubelet[1418]: I0909 00:26:06.233840 1418 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:26:06.233956 kubelet[1418]: I0909 00:26:06.233938 1418 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:26:06.234204 kubelet[1418]: E0909 00:26:06.234181 1418 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:26:06.234346 kubelet[1418]: E0909 00:26:06.234325 1418 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.40\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:26:06.235527 kubelet[1418]: I0909 00:26:06.235504 1418 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:26:06.236100 kubelet[1418]: E0909 00:26:06.233873 1418 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.40.18637595e32046f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.40,UID:10.0.0.40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.40,},FirstTimestamp:2025-09-09 00:26:06.225745655 +0000 UTC m=+2.271301126,LastTimestamp:2025-09-09 00:26:06.225745655 +0000 UTC m=+2.271301126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.40,}" Sep 9 00:26:06.250418 kubelet[1418]: I0909 00:26:06.250385 1418 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:26:06.250418 kubelet[1418]: I0909 00:26:06.250410 1418 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:26:06.250533 kubelet[1418]: I0909 00:26:06.250437 1418 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:26:06.250706 kubelet[1418]: E0909 00:26:06.250652 1418 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.40\" not found" node="10.0.0.40" Sep 9 
00:26:06.333488 kubelet[1418]: E0909 00:26:06.333436 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:06.352898 kubelet[1418]: I0909 00:26:06.352874 1418 policy_none.go:49] "None policy: Start" Sep 9 00:26:06.352898 kubelet[1418]: I0909 00:26:06.352903 1418 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:26:06.353010 kubelet[1418]: I0909 00:26:06.352916 1418 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:26:06.357563 systemd[1]: Created slice kubepods.slice. Sep 9 00:26:06.361520 systemd[1]: Created slice kubepods-burstable.slice. Sep 9 00:26:06.363937 systemd[1]: Created slice kubepods-besteffort.slice. Sep 9 00:26:06.373551 kubelet[1418]: E0909 00:26:06.373521 1418 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:26:06.373800 kubelet[1418]: I0909 00:26:06.373779 1418 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:26:06.374221 kubelet[1418]: I0909 00:26:06.374178 1418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:26:06.374546 kubelet[1418]: I0909 00:26:06.374525 1418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:26:06.375716 kubelet[1418]: E0909 00:26:06.375687 1418 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:26:06.375819 kubelet[1418]: E0909 00:26:06.375806 1418 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.40\" not found" Sep 9 00:26:06.411401 kubelet[1418]: I0909 00:26:06.411345 1418 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:26:06.412416 kubelet[1418]: I0909 00:26:06.412384 1418 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:26:06.412416 kubelet[1418]: I0909 00:26:06.412406 1418 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:26:06.412497 kubelet[1418]: I0909 00:26:06.412426 1418 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
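The eviction manager that starts above enforces the hard thresholds listed in the container-manager config logged earlier: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch of that evaluation against invented node stats (only the thresholds come from the log; every observed value below is made up):

    # Sketch: apply the hard eviction thresholds from the kubelet config
    # logged above to example node stats; the stats themselves are invented.
    THRESHOLDS = {
        "memory.available":  ("quantity", 100 * 1024 * 1024),  # 100Mi
        "nodefs.available":  ("percentage", 0.10),
        "nodefs.inodesFree": ("percentage", 0.05),
        "imagefs.available": ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    # hypothetical observations: (current value, capacity), bytes or inode counts
    stats = {
        "memory.available":  (80 * 1024 * 1024, None),
        "nodefs.available":  (9 * 10**9, 50 * 10**9),
        "nodefs.inodesFree": (400_000, 3_000_000),
        "imagefs.available": (20 * 10**9, 50 * 10**9),
        "imagefs.inodesFree": (500_000, 3_000_000),
    }

    for signal, (kind, limit) in THRESHOLDS.items():
        value, capacity = stats[signal]
        threshold = limit if kind == "quantity" else limit * capacity
        if value < threshold:
            print(f"{signal}: {value} < {threshold:.0f} -> would trigger eviction")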
Sep 9 00:26:06.412497 kubelet[1418]: I0909 00:26:06.412432 1418 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:26:06.412497 kubelet[1418]: E0909 00:26:06.412474 1418 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 9 00:26:06.475526 kubelet[1418]: I0909 00:26:06.475429 1418 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.40" Sep 9 00:26:06.481678 kubelet[1418]: I0909 00:26:06.481643 1418 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.40" Sep 9 00:26:06.481678 kubelet[1418]: E0909 00:26:06.481676 1418 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.40\": node \"10.0.0.40\" not found" Sep 9 00:26:06.489566 kubelet[1418]: E0909 00:26:06.489536 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:06.590296 kubelet[1418]: E0909 00:26:06.590238 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:06.690657 kubelet[1418]: E0909 00:26:06.690607 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:06.791860 kubelet[1418]: E0909 00:26:06.791742 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:06.892313 kubelet[1418]: E0909 00:26:06.892250 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:06.976479 sudo[1313]: pam_unix(sudo:session): session closed for user root Sep 9 00:26:06.979218 sshd[1310]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:06.982463 systemd[1]: sshd@4-10.0.0.40:22-10.0.0.1:49372.service: Deactivated successfully. Sep 9 00:26:06.983464 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:26:06.984562 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:26:06.986184 systemd-logind[1202]: Removed session 5. 
Sep 9 00:26:06.992855 kubelet[1418]: E0909 00:26:06.992815 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:07.093663 kubelet[1418]: E0909 00:26:07.093524 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:07.168232 kubelet[1418]: I0909 00:26:07.168194 1418 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 9 00:26:07.168569 kubelet[1418]: I0909 00:26:07.168380 1418 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 00:26:07.168569 kubelet[1418]: I0909 00:26:07.168415 1418 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 00:26:07.168569 kubelet[1418]: I0909 00:26:07.168439 1418 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 00:26:07.194673 kubelet[1418]: E0909 00:26:07.194636 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:07.216870 kubelet[1418]: E0909 00:26:07.216834 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:07.295475 kubelet[1418]: E0909 00:26:07.295429 1418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" Sep 9 00:26:07.397122 kubelet[1418]: I0909 00:26:07.396999 1418 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 9 00:26:07.397446 env[1214]: time="2025-09-09T00:26:07.397331077Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:26:07.397759 kubelet[1418]: I0909 00:26:07.397588 1418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 9 00:26:08.217190 kubelet[1418]: E0909 00:26:08.217159 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:08.222829 kubelet[1418]: I0909 00:26:08.222804 1418 apiserver.go:52] "Watching apiserver" Sep 9 00:26:08.243389 systemd[1]: Created slice kubepods-burstable-poddccfe4cd_8b2d_49e9_be67_7c671d8c1d4d.slice. 
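The entries above also show kubelet pushing the node's pod CIDR 192.168.1.0/24 into the runtime ("Updating runtime config through cri with podcidr"), so that single /24 bounds how many pod IPs this node can allocate. A quick check of the arithmetic with the logged CIDR; how many of those addresses a given CNI plugin actually hands out varies:

    # Sketch: size of the pod CIDR pushed to the runtime in the entries above.
    import ipaddress

    cidr = ipaddress.ip_network("192.168.1.0/24")
    print(cidr.num_addresses)       # 256 addresses in the /24
    print(cidr.num_addresses - 2)   # 254 if the network/broadcast ends are excluded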
Sep 9 00:26:08.246799 kubelet[1418]: I0909 00:26:08.246736 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hubble-tls\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246799 kubelet[1418]: I0909 00:26:08.246790 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hostproc\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246799 kubelet[1418]: I0909 00:26:08.246809 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-lib-modules\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246981 kubelet[1418]: I0909 00:26:08.246826 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-xtables-lock\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246981 kubelet[1418]: I0909 00:26:08.246841 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-net\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246981 kubelet[1418]: I0909 00:26:08.246856 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-kernel\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246981 kubelet[1418]: I0909 00:26:08.246875 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862hm\" (UniqueName: \"kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-kube-api-access-862hm\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246981 kubelet[1418]: I0909 00:26:08.246896 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-run\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.246981 kubelet[1418]: I0909 00:26:08.246910 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-bpf-maps\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.247122 kubelet[1418]: I0909 00:26:08.246927 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-cgroup\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.247122 kubelet[1418]: I0909 00:26:08.246941 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cni-path\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.247122 kubelet[1418]: I0909 00:26:08.246955 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-etc-cni-netd\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.247122 kubelet[1418]: I0909 00:26:08.246971 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-clustermesh-secrets\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.247122 kubelet[1418]: I0909 00:26:08.246984 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-config-path\") pod \"cilium-xbzrl\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " pod="kube-system/cilium-xbzrl" Sep 9 00:26:08.261665 systemd[1]: Created slice kubepods-besteffort-pod4a9589ef_fecf_4801_804c_8af60ac67f73.slice. 
Sep 9 00:26:08.334556 kubelet[1418]: I0909 00:26:08.334480 1418 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:26:08.347609 kubelet[1418]: I0909 00:26:08.347557 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj5pb\" (UniqueName: \"kubernetes.io/projected/4a9589ef-fecf-4801-804c-8af60ac67f73-kube-api-access-kj5pb\") pod \"kube-proxy-znhcd\" (UID: \"4a9589ef-fecf-4801-804c-8af60ac67f73\") " pod="kube-system/kube-proxy-znhcd" Sep 9 00:26:08.347736 kubelet[1418]: I0909 00:26:08.347652 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a9589ef-fecf-4801-804c-8af60ac67f73-xtables-lock\") pod \"kube-proxy-znhcd\" (UID: \"4a9589ef-fecf-4801-804c-8af60ac67f73\") " pod="kube-system/kube-proxy-znhcd" Sep 9 00:26:08.347770 kubelet[1418]: I0909 00:26:08.347751 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a9589ef-fecf-4801-804c-8af60ac67f73-kube-proxy\") pod \"kube-proxy-znhcd\" (UID: \"4a9589ef-fecf-4801-804c-8af60ac67f73\") " pod="kube-system/kube-proxy-znhcd" Sep 9 00:26:08.347812 kubelet[1418]: I0909 00:26:08.347795 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a9589ef-fecf-4801-804c-8af60ac67f73-lib-modules\") pod \"kube-proxy-znhcd\" (UID: \"4a9589ef-fecf-4801-804c-8af60ac67f73\") " pod="kube-system/kube-proxy-znhcd" Sep 9 00:26:08.348308 kubelet[1418]: I0909 00:26:08.348276 1418 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 9 00:26:08.559959 kubelet[1418]: E0909 00:26:08.559917 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:08.560726 env[1214]: time="2025-09-09T00:26:08.560633859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xbzrl,Uid:dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d,Namespace:kube-system,Attempt:0,}" Sep 9 00:26:08.575915 kubelet[1418]: E0909 00:26:08.575867 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:08.576747 env[1214]: time="2025-09-09T00:26:08.576366595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znhcd,Uid:4a9589ef-fecf-4801-804c-8af60ac67f73,Namespace:kube-system,Attempt:0,}" Sep 9 00:26:09.107812 env[1214]: time="2025-09-09T00:26:09.107762203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.109889 env[1214]: time="2025-09-09T00:26:09.109852808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.111526 env[1214]: time="2025-09-09T00:26:09.111497572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.112937 env[1214]: time="2025-09-09T00:26:09.112900975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.113531 env[1214]: time="2025-09-09T00:26:09.113510482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.118957 env[1214]: time="2025-09-09T00:26:09.118845142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.121240 env[1214]: time="2025-09-09T00:26:09.121152004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.124150 env[1214]: time="2025-09-09T00:26:09.124112451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:09.142317 env[1214]: time="2025-09-09T00:26:09.142236915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:26:09.142317 env[1214]: time="2025-09-09T00:26:09.142273833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:26:09.142317 env[1214]: time="2025-09-09T00:26:09.142283580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:26:09.142687 env[1214]: time="2025-09-09T00:26:09.142644247Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174 pid=1484 runtime=io.containerd.runc.v2 Sep 9 00:26:09.143650 env[1214]: time="2025-09-09T00:26:09.143592142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:26:09.143713 env[1214]: time="2025-09-09T00:26:09.143664586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:26:09.143713 env[1214]: time="2025-09-09T00:26:09.143701584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:26:09.143919 env[1214]: time="2025-09-09T00:26:09.143882514Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b2d390605273c3f9f2fbad7dae7a7741830123f678592e98d57b85eeabed2d6 pid=1487 runtime=io.containerd.runc.v2 Sep 9 00:26:09.161522 systemd[1]: Started cri-containerd-0b2d390605273c3f9f2fbad7dae7a7741830123f678592e98d57b85eeabed2d6.scope. Sep 9 00:26:09.166381 systemd[1]: Started cri-containerd-8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174.scope. Sep 9 00:26:09.197244 env[1214]: time="2025-09-09T00:26:09.197180825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xbzrl,Uid:dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\"" Sep 9 00:26:09.198411 env[1214]: time="2025-09-09T00:26:09.198376843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znhcd,Uid:4a9589ef-fecf-4801-804c-8af60ac67f73,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b2d390605273c3f9f2fbad7dae7a7741830123f678592e98d57b85eeabed2d6\"" Sep 9 00:26:09.198509 kubelet[1418]: E0909 00:26:09.198450 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:09.198931 kubelet[1418]: E0909 00:26:09.198902 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:09.200767 env[1214]: time="2025-09-09T00:26:09.200732279Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:26:09.218250 kubelet[1418]: E0909 00:26:09.218190 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:09.354671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414154873.mount: Deactivated successfully. 
Sep 9 00:26:10.218631 kubelet[1418]: E0909 00:26:10.218450 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:11.219666 kubelet[1418]: E0909 00:26:11.219579 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:12.220353 kubelet[1418]: E0909 00:26:12.220265 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:13.220831 kubelet[1418]: E0909 00:26:13.220792 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:14.221469 kubelet[1418]: E0909 00:26:14.221414 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:14.335524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771794793.mount: Deactivated successfully. Sep 9 00:26:15.221656 kubelet[1418]: E0909 00:26:15.221537 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:16.222399 kubelet[1418]: E0909 00:26:16.222356 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:16.805136 env[1214]: time="2025-09-09T00:26:16.804321503Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:16.808121 env[1214]: time="2025-09-09T00:26:16.808062613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:16.810946 env[1214]: time="2025-09-09T00:26:16.810893234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:16.811704 env[1214]: time="2025-09-09T00:26:16.811663425Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 00:26:16.815953 env[1214]: time="2025-09-09T00:26:16.815905623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:26:16.822937 env[1214]: time="2025-09-09T00:26:16.822906675Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:26:16.850797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95676516.mount: Deactivated successfully. Sep 9 00:26:16.855859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615141886.mount: Deactivated successfully. 
Sep 9 00:26:16.872460 env[1214]: time="2025-09-09T00:26:16.872413808Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\"" Sep 9 00:26:16.873501 env[1214]: time="2025-09-09T00:26:16.873463201Z" level=info msg="StartContainer for \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\"" Sep 9 00:26:16.898522 systemd[1]: Started cri-containerd-220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e.scope. Sep 9 00:26:16.944960 env[1214]: time="2025-09-09T00:26:16.944903659Z" level=info msg="StartContainer for \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\" returns successfully" Sep 9 00:26:16.959739 systemd[1]: cri-containerd-220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e.scope: Deactivated successfully. Sep 9 00:26:17.117053 env[1214]: time="2025-09-09T00:26:17.116919961Z" level=info msg="shim disconnected" id=220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e Sep 9 00:26:17.117353 env[1214]: time="2025-09-09T00:26:17.117333826Z" level=warning msg="cleaning up after shim disconnected" id=220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e namespace=k8s.io Sep 9 00:26:17.117526 env[1214]: time="2025-09-09T00:26:17.117509696Z" level=info msg="cleaning up dead shim" Sep 9 00:26:17.124005 env[1214]: time="2025-09-09T00:26:17.123969958Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1603 runtime=io.containerd.runc.v2\n" Sep 9 00:26:17.222912 kubelet[1418]: E0909 00:26:17.222830 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:17.451170 kubelet[1418]: E0909 00:26:17.449259 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:17.456371 env[1214]: time="2025-09-09T00:26:17.456303945Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:26:17.529519 env[1214]: time="2025-09-09T00:26:17.529229694Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\"" Sep 9 00:26:17.530471 env[1214]: time="2025-09-09T00:26:17.530230101Z" level=info msg="StartContainer for \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\"" Sep 9 00:26:17.546898 systemd[1]: Started cri-containerd-aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648.scope. Sep 9 00:26:17.585981 env[1214]: time="2025-09-09T00:26:17.585928991Z" level=info msg="StartContainer for \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\" returns successfully" Sep 9 00:26:17.595591 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:26:17.595822 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:26:17.596004 systemd[1]: Stopping systemd-sysctl.service... Sep 9 00:26:17.599016 systemd[1]: Starting systemd-sysctl.service... 
Sep 9 00:26:17.599250 systemd[1]: cri-containerd-aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648.scope: Deactivated successfully. Sep 9 00:26:17.606533 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:26:17.640501 env[1214]: time="2025-09-09T00:26:17.640426212Z" level=info msg="shim disconnected" id=aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648 Sep 9 00:26:17.640501 env[1214]: time="2025-09-09T00:26:17.640482307Z" level=warning msg="cleaning up after shim disconnected" id=aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648 namespace=k8s.io Sep 9 00:26:17.640501 env[1214]: time="2025-09-09T00:26:17.640493287Z" level=info msg="cleaning up dead shim" Sep 9 00:26:17.648074 env[1214]: time="2025-09-09T00:26:17.648017197Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1667 runtime=io.containerd.runc.v2\n" Sep 9 00:26:17.848512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e-rootfs.mount: Deactivated successfully. Sep 9 00:26:18.147480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328632091.mount: Deactivated successfully. Sep 9 00:26:18.223326 kubelet[1418]: E0909 00:26:18.223237 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:18.453762 kubelet[1418]: E0909 00:26:18.453535 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:18.464541 env[1214]: time="2025-09-09T00:26:18.464316394Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:26:18.485607 env[1214]: time="2025-09-09T00:26:18.485561788Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\"" Sep 9 00:26:18.486297 env[1214]: time="2025-09-09T00:26:18.486260843Z" level=info msg="StartContainer for \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\"" Sep 9 00:26:18.509780 systemd[1]: Started cri-containerd-5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a.scope. Sep 9 00:26:18.548506 systemd[1]: cri-containerd-5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a.scope: Deactivated successfully. 
Sep 9 00:26:18.548674 env[1214]: time="2025-09-09T00:26:18.548589732Z" level=info msg="StartContainer for \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\" returns successfully" Sep 9 00:26:18.700020 env[1214]: time="2025-09-09T00:26:18.699978199Z" level=info msg="shim disconnected" id=5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a Sep 9 00:26:18.700242 env[1214]: time="2025-09-09T00:26:18.700225074Z" level=warning msg="cleaning up after shim disconnected" id=5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a namespace=k8s.io Sep 9 00:26:18.700301 env[1214]: time="2025-09-09T00:26:18.700289449Z" level=info msg="cleaning up dead shim" Sep 9 00:26:18.707443 env[1214]: time="2025-09-09T00:26:18.707079685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:18.709527 env[1214]: time="2025-09-09T00:26:18.709497923Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:18.712395 env[1214]: time="2025-09-09T00:26:18.712359515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:18.713862 env[1214]: time="2025-09-09T00:26:18.713834139Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1727 runtime=io.containerd.runc.v2\n" Sep 9 00:26:18.714385 env[1214]: time="2025-09-09T00:26:18.714358320Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:18.715082 env[1214]: time="2025-09-09T00:26:18.715051864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 00:26:18.720503 env[1214]: time="2025-09-09T00:26:18.720433647Z" level=info msg="CreateContainer within sandbox \"0b2d390605273c3f9f2fbad7dae7a7741830123f678592e98d57b85eeabed2d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:26:18.736138 env[1214]: time="2025-09-09T00:26:18.736077618Z" level=info msg="CreateContainer within sandbox \"0b2d390605273c3f9f2fbad7dae7a7741830123f678592e98d57b85eeabed2d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6be9f7735029c5dc95cf1e1bdecd0af9f92ff6312a408c643c6346756e1b3edf\"" Sep 9 00:26:18.736783 env[1214]: time="2025-09-09T00:26:18.736752912Z" level=info msg="StartContainer for \"6be9f7735029c5dc95cf1e1bdecd0af9f92ff6312a408c643c6346756e1b3edf\"" Sep 9 00:26:18.754839 systemd[1]: Started cri-containerd-6be9f7735029c5dc95cf1e1bdecd0af9f92ff6312a408c643c6346756e1b3edf.scope. Sep 9 00:26:18.792738 env[1214]: time="2025-09-09T00:26:18.791754046Z" level=info msg="StartContainer for \"6be9f7735029c5dc95cf1e1bdecd0af9f92ff6312a408c643c6346756e1b3edf\" returns successfully" Sep 9 00:26:18.850242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a-rootfs.mount: Deactivated successfully. 
Sep 9 00:26:19.223581 kubelet[1418]: E0909 00:26:19.223541 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:19.457530 kubelet[1418]: E0909 00:26:19.457154 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:19.468706 kubelet[1418]: E0909 00:26:19.467717 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:19.476169 env[1214]: time="2025-09-09T00:26:19.475231689Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:26:19.485265 kubelet[1418]: I0909 00:26:19.485177 1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-znhcd" podStartSLOduration=3.969613119 podStartE2EDuration="13.485154428s" podCreationTimestamp="2025-09-09 00:26:06 +0000 UTC" firstStartedPulling="2025-09-09 00:26:09.200389355 +0000 UTC m=+5.245944826" lastFinishedPulling="2025-09-09 00:26:18.715930664 +0000 UTC m=+14.761486135" observedRunningTime="2025-09-09 00:26:19.484989344 +0000 UTC m=+15.530544815" watchObservedRunningTime="2025-09-09 00:26:19.485154428 +0000 UTC m=+15.530709898" Sep 9 00:26:19.497370 env[1214]: time="2025-09-09T00:26:19.497314839Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\"" Sep 9 00:26:19.497828 env[1214]: time="2025-09-09T00:26:19.497796829Z" level=info msg="StartContainer for \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\"" Sep 9 00:26:19.517804 systemd[1]: Started cri-containerd-fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba.scope. Sep 9 00:26:19.547101 systemd[1]: cri-containerd-fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba.scope: Deactivated successfully. Sep 9 00:26:19.549579 env[1214]: time="2025-09-09T00:26:19.549538673Z" level=info msg="StartContainer for \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\" returns successfully" Sep 9 00:26:19.581024 env[1214]: time="2025-09-09T00:26:19.580979892Z" level=info msg="shim disconnected" id=fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba Sep 9 00:26:19.581024 env[1214]: time="2025-09-09T00:26:19.581025027Z" level=warning msg="cleaning up after shim disconnected" id=fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba namespace=k8s.io Sep 9 00:26:19.581235 env[1214]: time="2025-09-09T00:26:19.581034973Z" level=info msg="cleaning up dead shim" Sep 9 00:26:19.587536 env[1214]: time="2025-09-09T00:26:19.587488125Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1953 runtime=io.containerd.runc.v2\n" Sep 9 00:26:19.847971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba-rootfs.mount: Deactivated successfully. 
Sep 9 00:26:20.224348 kubelet[1418]: E0909 00:26:20.223985 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:20.471147 kubelet[1418]: E0909 00:26:20.471101 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:20.471880 kubelet[1418]: E0909 00:26:20.471862 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:20.477357 env[1214]: time="2025-09-09T00:26:20.477012794Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:26:20.492489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081814768.mount: Deactivated successfully. Sep 9 00:26:20.506553 env[1214]: time="2025-09-09T00:26:20.506502340Z" level=info msg="CreateContainer within sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\"" Sep 9 00:26:20.507497 env[1214]: time="2025-09-09T00:26:20.507466611Z" level=info msg="StartContainer for \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\"" Sep 9 00:26:20.527672 systemd[1]: Started cri-containerd-eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800.scope. Sep 9 00:26:20.573727 env[1214]: time="2025-09-09T00:26:20.568882407Z" level=info msg="StartContainer for \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\" returns successfully" Sep 9 00:26:20.670686 kubelet[1418]: I0909 00:26:20.670621 1418 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:26:20.709718 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 9 00:26:20.957739 kernel: Initializing XFRM netlink socket Sep 9 00:26:20.959716 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 9 00:26:21.226761 kubelet[1418]: E0909 00:26:21.226613 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:21.478854 kubelet[1418]: E0909 00:26:21.478756 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:22.227279 kubelet[1418]: E0909 00:26:22.227207 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:22.480669 kubelet[1418]: E0909 00:26:22.480575 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:22.586774 systemd-networkd[1039]: cilium_host: Link UP Sep 9 00:26:22.587655 systemd-networkd[1039]: cilium_net: Link UP Sep 9 00:26:22.589059 systemd-networkd[1039]: cilium_net: Gained carrier Sep 9 00:26:22.591220 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 9 00:26:22.591341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 9 00:26:22.591449 systemd-networkd[1039]: cilium_host: Gained carrier Sep 9 00:26:22.591589 systemd-networkd[1039]: cilium_net: Gained IPv6LL Sep 9 00:26:22.591772 systemd-networkd[1039]: cilium_host: Gained IPv6LL Sep 9 00:26:22.671027 systemd-networkd[1039]: cilium_vxlan: Link UP Sep 9 00:26:22.671034 systemd-networkd[1039]: cilium_vxlan: Gained carrier Sep 9 00:26:22.914779 kernel: NET: Registered PF_ALG protocol family Sep 9 00:26:22.923419 kubelet[1418]: I0909 00:26:22.923366 1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xbzrl" podStartSLOduration=9.309714534 podStartE2EDuration="16.923333488s" podCreationTimestamp="2025-09-09 00:26:06 +0000 UTC" firstStartedPulling="2025-09-09 00:26:09.200312456 +0000 UTC m=+5.245867926" lastFinishedPulling="2025-09-09 00:26:16.813931409 +0000 UTC m=+12.859486880" observedRunningTime="2025-09-09 00:26:21.502089451 +0000 UTC m=+17.547644922" watchObservedRunningTime="2025-09-09 00:26:22.923333488 +0000 UTC m=+18.968888959" Sep 9 00:26:22.939491 systemd[1]: Created slice kubepods-besteffort-pod7a05e55c_c9b2_4d1c_98a9_2cbfa2697af4.slice. 
Sep 9 00:26:23.047750 kubelet[1418]: I0909 00:26:23.047707 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncj95\" (UniqueName: \"kubernetes.io/projected/7a05e55c-c9b2-4d1c-98a9-2cbfa2697af4-kube-api-access-ncj95\") pod \"nginx-deployment-7fcdb87857-qc9db\" (UID: \"7a05e55c-c9b2-4d1c-98a9-2cbfa2697af4\") " pod="default/nginx-deployment-7fcdb87857-qc9db" Sep 9 00:26:23.228313 kubelet[1418]: E0909 00:26:23.228203 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:23.242372 env[1214]: time="2025-09-09T00:26:23.242321375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qc9db,Uid:7a05e55c-c9b2-4d1c-98a9-2cbfa2697af4,Namespace:default,Attempt:0,}" Sep 9 00:26:23.481890 kubelet[1418]: E0909 00:26:23.481794 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:23.541887 systemd-networkd[1039]: lxc_health: Link UP Sep 9 00:26:23.549105 systemd-networkd[1039]: lxc_health: Gained carrier Sep 9 00:26:23.549717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 9 00:26:23.777340 systemd-networkd[1039]: lxc47adc080988e: Link UP Sep 9 00:26:23.787722 kernel: eth0: renamed from tmp2da97 Sep 9 00:26:23.794712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:26:23.794779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc47adc080988e: link becomes ready Sep 9 00:26:23.795480 systemd-networkd[1039]: lxc47adc080988e: Gained carrier Sep 9 00:26:23.901870 systemd-networkd[1039]: cilium_vxlan: Gained IPv6LL Sep 9 00:26:24.229098 kubelet[1418]: E0909 00:26:24.228956 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:24.561554 kubelet[1418]: E0909 00:26:24.561494 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:25.181894 systemd-networkd[1039]: lxc_health: Gained IPv6LL Sep 9 00:26:25.229278 kubelet[1418]: E0909 00:26:25.229222 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:25.693867 systemd-networkd[1039]: lxc47adc080988e: Gained IPv6LL Sep 9 00:26:26.216054 kubelet[1418]: E0909 00:26:26.216017 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:26.229575 kubelet[1418]: E0909 00:26:26.229536 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:27.230627 kubelet[1418]: E0909 00:26:27.230572 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:27.338518 env[1214]: time="2025-09-09T00:26:27.338429321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:26:27.338518 env[1214]: time="2025-09-09T00:26:27.338514199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:26:27.338878 env[1214]: time="2025-09-09T00:26:27.338551701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:26:27.341975 env[1214]: time="2025-09-09T00:26:27.338774751Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2da97410cb70aae6d0b3a5c989f844856bbf778ffda8e0498e1aa5b20d1523e1 pid=2492 runtime=io.containerd.runc.v2 Sep 9 00:26:27.356377 systemd[1]: run-containerd-runc-k8s.io-2da97410cb70aae6d0b3a5c989f844856bbf778ffda8e0498e1aa5b20d1523e1-runc.f79Nw4.mount: Deactivated successfully. Sep 9 00:26:27.358790 systemd[1]: Started cri-containerd-2da97410cb70aae6d0b3a5c989f844856bbf778ffda8e0498e1aa5b20d1523e1.scope. Sep 9 00:26:27.377526 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:26:27.394035 env[1214]: time="2025-09-09T00:26:27.393992864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qc9db,Uid:7a05e55c-c9b2-4d1c-98a9-2cbfa2697af4,Namespace:default,Attempt:0,} returns sandbox id \"2da97410cb70aae6d0b3a5c989f844856bbf778ffda8e0498e1aa5b20d1523e1\"" Sep 9 00:26:27.395669 env[1214]: time="2025-09-09T00:26:27.395639294Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 9 00:26:28.231341 kubelet[1418]: E0909 00:26:28.231264 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:29.232311 kubelet[1418]: E0909 00:26:29.232250 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:30.233289 kubelet[1418]: E0909 00:26:30.233225 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:31.233566 kubelet[1418]: E0909 00:26:31.233513 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:31.685739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056607158.mount: Deactivated successfully. 
Sep 9 00:26:32.233890 kubelet[1418]: E0909 00:26:32.233838 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:32.967938 env[1214]: time="2025-09-09T00:26:32.967538322Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:32.971338 env[1214]: time="2025-09-09T00:26:32.970569619Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:32.973024 env[1214]: time="2025-09-09T00:26:32.972969249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:32.975659 env[1214]: time="2025-09-09T00:26:32.975589995Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:32.976348 env[1214]: time="2025-09-09T00:26:32.976285060Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 9 00:26:32.982337 env[1214]: time="2025-09-09T00:26:32.982282535Z" level=info msg="CreateContainer within sandbox \"2da97410cb70aae6d0b3a5c989f844856bbf778ffda8e0498e1aa5b20d1523e1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 9 00:26:32.996558 env[1214]: time="2025-09-09T00:26:32.996493600Z" level=info msg="CreateContainer within sandbox \"2da97410cb70aae6d0b3a5c989f844856bbf778ffda8e0498e1aa5b20d1523e1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"30851873de51580309f74138f879ee2c522f8c97fc60aa4acfcb12d21454ec3c\"" Sep 9 00:26:32.997048 env[1214]: time="2025-09-09T00:26:32.996956230Z" level=info msg="StartContainer for \"30851873de51580309f74138f879ee2c522f8c97fc60aa4acfcb12d21454ec3c\"" Sep 9 00:26:33.015677 systemd[1]: Started cri-containerd-30851873de51580309f74138f879ee2c522f8c97fc60aa4acfcb12d21454ec3c.scope. 
Sep 9 00:26:33.046960 env[1214]: time="2025-09-09T00:26:33.046914844Z" level=info msg="StartContainer for \"30851873de51580309f74138f879ee2c522f8c97fc60aa4acfcb12d21454ec3c\" returns successfully" Sep 9 00:26:33.234793 kubelet[1418]: E0909 00:26:33.234260 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:33.924993 kubelet[1418]: I0909 00:26:33.924909 1418 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:26:33.925977 kubelet[1418]: E0909 00:26:33.925854 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:33.952395 kubelet[1418]: I0909 00:26:33.952229 1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-qc9db" podStartSLOduration=6.369694468 podStartE2EDuration="11.952214289s" podCreationTimestamp="2025-09-09 00:26:22 +0000 UTC" firstStartedPulling="2025-09-09 00:26:27.394988934 +0000 UTC m=+23.440544405" lastFinishedPulling="2025-09-09 00:26:32.977508755 +0000 UTC m=+29.023064226" observedRunningTime="2025-09-09 00:26:33.520712971 +0000 UTC m=+29.566268442" watchObservedRunningTime="2025-09-09 00:26:33.952214289 +0000 UTC m=+29.997769720" Sep 9 00:26:34.234979 kubelet[1418]: E0909 00:26:34.234576 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:34.511764 kubelet[1418]: E0909 00:26:34.511459 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:35.236210 kubelet[1418]: E0909 00:26:35.236117 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:36.236685 kubelet[1418]: E0909 00:26:36.236647 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:37.237889 kubelet[1418]: E0909 00:26:37.237839 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:38.238252 kubelet[1418]: E0909 00:26:38.238185 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:39.239089 kubelet[1418]: E0909 00:26:39.239004 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:39.623407 systemd[1]: Created slice kubepods-besteffort-pod2281e6f6_1338_419f_ac0a_de7b8f6fbef3.slice. 
Sep 9 00:26:39.648637 kubelet[1418]: I0909 00:26:39.648577 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2281e6f6-1338-419f-ac0a-de7b8f6fbef3-data\") pod \"nfs-server-provisioner-0\" (UID: \"2281e6f6-1338-419f-ac0a-de7b8f6fbef3\") " pod="default/nfs-server-provisioner-0" Sep 9 00:26:39.648637 kubelet[1418]: I0909 00:26:39.648638 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt9qf\" (UniqueName: \"kubernetes.io/projected/2281e6f6-1338-419f-ac0a-de7b8f6fbef3-kube-api-access-rt9qf\") pod \"nfs-server-provisioner-0\" (UID: \"2281e6f6-1338-419f-ac0a-de7b8f6fbef3\") " pod="default/nfs-server-provisioner-0" Sep 9 00:26:39.929809 env[1214]: time="2025-09-09T00:26:39.929415322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2281e6f6-1338-419f-ac0a-de7b8f6fbef3,Namespace:default,Attempt:0,}" Sep 9 00:26:39.963425 systemd-networkd[1039]: lxcb6db516821c1: Link UP Sep 9 00:26:39.972727 kernel: eth0: renamed from tmpbd503 Sep 9 00:26:39.980904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:26:39.981062 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb6db516821c1: link becomes ready Sep 9 00:26:39.982955 systemd-networkd[1039]: lxcb6db516821c1: Gained carrier Sep 9 00:26:40.126229 env[1214]: time="2025-09-09T00:26:40.126147063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:26:40.126229 env[1214]: time="2025-09-09T00:26:40.126188903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:26:40.126229 env[1214]: time="2025-09-09T00:26:40.126200663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:26:40.126419 env[1214]: time="2025-09-09T00:26:40.126372020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd503338c8eef75b8321a82da5f07fed9627f12f76d283abefa554a56c2547f3 pid=2622 runtime=io.containerd.runc.v2 Sep 9 00:26:40.140285 systemd[1]: Started cri-containerd-bd503338c8eef75b8321a82da5f07fed9627f12f76d283abefa554a56c2547f3.scope. Sep 9 00:26:40.165037 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:26:40.179513 env[1214]: time="2025-09-09T00:26:40.179473626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2281e6f6-1338-419f-ac0a-de7b8f6fbef3,Namespace:default,Attempt:0,} returns sandbox id \"bd503338c8eef75b8321a82da5f07fed9627f12f76d283abefa554a56c2547f3\"" Sep 9 00:26:40.181457 env[1214]: time="2025-09-09T00:26:40.181266042Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 9 00:26:40.239172 kubelet[1418]: E0909 00:26:40.239127 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:40.761583 systemd[1]: run-containerd-runc-k8s.io-bd503338c8eef75b8321a82da5f07fed9627f12f76d283abefa554a56c2547f3-runc.EGUa4j.mount: Deactivated successfully. 
Sep 9 00:26:41.240058 kubelet[1418]: E0909 00:26:41.239816 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:41.437930 systemd-networkd[1039]: lxcb6db516821c1: Gained IPv6LL Sep 9 00:26:41.993029 update_engine[1205]: I0909 00:26:41.992978 1205 update_attempter.cc:509] Updating boot flags... Sep 9 00:26:42.240463 kubelet[1418]: E0909 00:26:42.240408 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:42.349779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562715151.mount: Deactivated successfully. Sep 9 00:26:43.241587 kubelet[1418]: E0909 00:26:43.241523 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:44.173087 env[1214]: time="2025-09-09T00:26:44.173014474Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:44.176676 env[1214]: time="2025-09-09T00:26:44.176549156Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:44.179119 env[1214]: time="2025-09-09T00:26:44.179084208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:44.182198 env[1214]: time="2025-09-09T00:26:44.182113414Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:44.183726 env[1214]: time="2025-09-09T00:26:44.182849206Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 9 00:26:44.194602 env[1214]: time="2025-09-09T00:26:44.194553958Z" level=info msg="CreateContainer within sandbox \"bd503338c8eef75b8321a82da5f07fed9627f12f76d283abefa554a56c2547f3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 9 00:26:44.206205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540727262.mount: Deactivated successfully. Sep 9 00:26:44.210902 env[1214]: time="2025-09-09T00:26:44.210856539Z" level=info msg="CreateContainer within sandbox \"bd503338c8eef75b8321a82da5f07fed9627f12f76d283abefa554a56c2547f3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7a7c38a40de2b0a4bd7c5f4159d445d873fae0f7369b8933d7a303e67ea7cd8e\"" Sep 9 00:26:44.211354 env[1214]: time="2025-09-09T00:26:44.211331814Z" level=info msg="StartContainer for \"7a7c38a40de2b0a4bd7c5f4159d445d873fae0f7369b8933d7a303e67ea7cd8e\"" Sep 9 00:26:44.242041 kubelet[1418]: E0909 00:26:44.241990 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:44.245567 systemd[1]: Started cri-containerd-7a7c38a40de2b0a4bd7c5f4159d445d873fae0f7369b8933d7a303e67ea7cd8e.scope. 
Sep 9 00:26:44.278387 env[1214]: time="2025-09-09T00:26:44.278339038Z" level=info msg="StartContainer for \"7a7c38a40de2b0a4bd7c5f4159d445d873fae0f7369b8933d7a303e67ea7cd8e\" returns successfully" Sep 9 00:26:44.547540 kubelet[1418]: I0909 00:26:44.547479 1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.544109419 podStartE2EDuration="5.547464603s" podCreationTimestamp="2025-09-09 00:26:39 +0000 UTC" firstStartedPulling="2025-09-09 00:26:40.180792408 +0000 UTC m=+36.226347879" lastFinishedPulling="2025-09-09 00:26:44.184147592 +0000 UTC m=+40.229703063" observedRunningTime="2025-09-09 00:26:44.546325615 +0000 UTC m=+40.591881086" watchObservedRunningTime="2025-09-09 00:26:44.547464603 +0000 UTC m=+40.593020074" Sep 9 00:26:45.243286 kubelet[1418]: E0909 00:26:45.243243 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:46.216524 kubelet[1418]: E0909 00:26:46.216408 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:46.244300 kubelet[1418]: E0909 00:26:46.244227 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:47.245137 kubelet[1418]: E0909 00:26:47.245100 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:48.246256 kubelet[1418]: E0909 00:26:48.246216 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:49.247202 kubelet[1418]: E0909 00:26:49.247157 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:49.610623 systemd[1]: Created slice kubepods-besteffort-pode8bd8667_0f57_43ee_aac9_031b9fe72f03.slice. Sep 9 00:26:49.716406 kubelet[1418]: I0909 00:26:49.716356 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-becd50af-713f-4273-908a-f5895e63d4ad\" (UniqueName: \"kubernetes.io/nfs/e8bd8667-0f57-43ee-aac9-031b9fe72f03-pvc-becd50af-713f-4273-908a-f5895e63d4ad\") pod \"test-pod-1\" (UID: \"e8bd8667-0f57-43ee-aac9-031b9fe72f03\") " pod="default/test-pod-1" Sep 9 00:26:49.716406 kubelet[1418]: I0909 00:26:49.716399 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxwp\" (UniqueName: \"kubernetes.io/projected/e8bd8667-0f57-43ee-aac9-031b9fe72f03-kube-api-access-phxwp\") pod \"test-pod-1\" (UID: \"e8bd8667-0f57-43ee-aac9-031b9fe72f03\") " pod="default/test-pod-1" Sep 9 00:26:49.838716 kernel: FS-Cache: Loaded Sep 9 00:26:49.866770 kernel: RPC: Registered named UNIX socket transport module. Sep 9 00:26:49.866903 kernel: RPC: Registered udp transport module. Sep 9 00:26:49.866926 kernel: RPC: Registered tcp transport module. Sep 9 00:26:49.866945 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Sep 9 00:26:49.908737 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 9 00:26:50.041749 kernel: NFS: Registering the id_resolver key type Sep 9 00:26:50.041874 kernel: Key type id_resolver registered Sep 9 00:26:50.041915 kernel: Key type id_legacy registered Sep 9 00:26:50.065665 nfsidmap[2750]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 9 00:26:50.068967 nfsidmap[2753]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 9 00:26:50.214646 env[1214]: time="2025-09-09T00:26:50.214167233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e8bd8667-0f57-43ee-aac9-031b9fe72f03,Namespace:default,Attempt:0,}" Sep 9 00:26:50.247796 kubelet[1418]: E0909 00:26:50.247759 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:50.249571 systemd-networkd[1039]: lxc3b8eceb06be4: Link UP Sep 9 00:26:50.264125 kernel: eth0: renamed from tmp4d82b Sep 9 00:26:50.271598 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:26:50.271801 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3b8eceb06be4: link becomes ready Sep 9 00:26:50.271720 systemd-networkd[1039]: lxc3b8eceb06be4: Gained carrier Sep 9 00:26:50.416508 env[1214]: time="2025-09-09T00:26:50.416438797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:26:50.416640 env[1214]: time="2025-09-09T00:26:50.416522556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:26:50.416640 env[1214]: time="2025-09-09T00:26:50.416553636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:26:50.416838 env[1214]: time="2025-09-09T00:26:50.416797514Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d82b3555a3a1123cdb6d936e044cac77aea400ea0c35dfc400b278947af9587 pid=2787 runtime=io.containerd.runc.v2 Sep 9 00:26:50.426818 systemd[1]: Started cri-containerd-4d82b3555a3a1123cdb6d936e044cac77aea400ea0c35dfc400b278947af9587.scope. 
Sep 9 00:26:50.446332 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:26:50.462468 env[1214]: time="2025-09-09T00:26:50.462419736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e8bd8667-0f57-43ee-aac9-031b9fe72f03,Namespace:default,Attempt:0,} returns sandbox id \"4d82b3555a3a1123cdb6d936e044cac77aea400ea0c35dfc400b278947af9587\"" Sep 9 00:26:50.463877 env[1214]: time="2025-09-09T00:26:50.463839684Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 9 00:26:50.677882 env[1214]: time="2025-09-09T00:26:50.677819471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:50.680087 env[1214]: time="2025-09-09T00:26:50.680049453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:50.683053 env[1214]: time="2025-09-09T00:26:50.683018028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:50.684830 env[1214]: time="2025-09-09T00:26:50.684801213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:26:50.685529 env[1214]: time="2025-09-09T00:26:50.685502007Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 9 00:26:50.690022 env[1214]: time="2025-09-09T00:26:50.689987570Z" level=info msg="CreateContainer within sandbox \"4d82b3555a3a1123cdb6d936e044cac77aea400ea0c35dfc400b278947af9587\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 9 00:26:50.702239 env[1214]: time="2025-09-09T00:26:50.702186349Z" level=info msg="CreateContainer within sandbox \"4d82b3555a3a1123cdb6d936e044cac77aea400ea0c35dfc400b278947af9587\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c8ca36aec180ddb096e3b037f8269161e6513a0d6353511f83429e8569df43cc\"" Sep 9 00:26:50.702718 env[1214]: time="2025-09-09T00:26:50.702687705Z" level=info msg="StartContainer for \"c8ca36aec180ddb096e3b037f8269161e6513a0d6353511f83429e8569df43cc\"" Sep 9 00:26:50.719111 systemd[1]: Started cri-containerd-c8ca36aec180ddb096e3b037f8269161e6513a0d6353511f83429e8569df43cc.scope. 
Sep 9 00:26:50.755862 env[1214]: time="2025-09-09T00:26:50.755577507Z" level=info msg="StartContainer for \"c8ca36aec180ddb096e3b037f8269161e6513a0d6353511f83429e8569df43cc\" returns successfully" Sep 9 00:26:51.247998 kubelet[1418]: E0909 00:26:51.247923 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:51.560994 kubelet[1418]: I0909 00:26:51.560869 1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=12.337254641 podStartE2EDuration="12.560853748s" podCreationTimestamp="2025-09-09 00:26:39 +0000 UTC" firstStartedPulling="2025-09-09 00:26:50.463286289 +0000 UTC m=+46.508841720" lastFinishedPulling="2025-09-09 00:26:50.686885356 +0000 UTC m=+46.732440827" observedRunningTime="2025-09-09 00:26:51.560214833 +0000 UTC m=+47.605770264" watchObservedRunningTime="2025-09-09 00:26:51.560853748 +0000 UTC m=+47.606409219" Sep 9 00:26:52.248631 kubelet[1418]: E0909 00:26:52.248580 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:52.253907 systemd-networkd[1039]: lxc3b8eceb06be4: Gained IPv6LL Sep 9 00:26:53.248963 kubelet[1418]: E0909 00:26:53.248898 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:54.250080 kubelet[1418]: E0909 00:26:54.250018 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:55.250604 kubelet[1418]: E0909 00:26:55.250564 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:56.251179 kubelet[1418]: E0909 00:26:56.251137 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:57.251697 kubelet[1418]: E0909 00:26:57.251650 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:58.253177 kubelet[1418]: E0909 00:26:58.253125 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:58.770575 systemd[1]: run-containerd-runc-k8s.io-eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800-runc.DE5EMM.mount: Deactivated successfully. Sep 9 00:26:58.802755 env[1214]: time="2025-09-09T00:26:58.802642906Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:26:58.810065 env[1214]: time="2025-09-09T00:26:58.810006062Z" level=info msg="StopContainer for \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\" with timeout 2 (s)" Sep 9 00:26:58.810359 env[1214]: time="2025-09-09T00:26:58.810332260Z" level=info msg="Stop container \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\" with signal terminated" Sep 9 00:26:58.817417 systemd-networkd[1039]: lxc_health: Link DOWN Sep 9 00:26:58.817423 systemd-networkd[1039]: lxc_health: Lost carrier Sep 9 00:26:58.867214 systemd[1]: cri-containerd-eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800.scope: Deactivated successfully. 
Sep 9 00:26:58.867533 systemd[1]: cri-containerd-eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800.scope: Consumed 6.210s CPU time. Sep 9 00:26:58.890247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800-rootfs.mount: Deactivated successfully. Sep 9 00:26:58.906256 env[1214]: time="2025-09-09T00:26:58.906209286Z" level=info msg="shim disconnected" id=eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800 Sep 9 00:26:58.906256 env[1214]: time="2025-09-09T00:26:58.906255406Z" level=warning msg="cleaning up after shim disconnected" id=eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800 namespace=k8s.io Sep 9 00:26:58.906503 env[1214]: time="2025-09-09T00:26:58.906276045Z" level=info msg="cleaning up dead shim" Sep 9 00:26:58.913880 env[1214]: time="2025-09-09T00:26:58.913826680Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2919 runtime=io.containerd.runc.v2\n" Sep 9 00:26:58.918599 env[1214]: time="2025-09-09T00:26:58.918474852Z" level=info msg="StopContainer for \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\" returns successfully" Sep 9 00:26:58.919188 env[1214]: time="2025-09-09T00:26:58.919158768Z" level=info msg="StopPodSandbox for \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\"" Sep 9 00:26:58.919412 env[1214]: time="2025-09-09T00:26:58.919389327Z" level=info msg="Container to stop \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:26:58.919484 env[1214]: time="2025-09-09T00:26:58.919467246Z" level=info msg="Container to stop \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:26:58.919602 env[1214]: time="2025-09-09T00:26:58.919580046Z" level=info msg="Container to stop \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:26:58.919682 env[1214]: time="2025-09-09T00:26:58.919664965Z" level=info msg="Container to stop \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:26:58.919768 env[1214]: time="2025-09-09T00:26:58.919750685Z" level=info msg="Container to stop \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:26:58.921451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174-shm.mount: Deactivated successfully. Sep 9 00:26:58.929716 systemd[1]: cri-containerd-8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174.scope: Deactivated successfully. Sep 9 00:26:58.954389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174-rootfs.mount: Deactivated successfully. 
Sep 9 00:26:58.963198 env[1214]: time="2025-09-09T00:26:58.963146065Z" level=info msg="shim disconnected" id=8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174 Sep 9 00:26:58.963473 env[1214]: time="2025-09-09T00:26:58.963450183Z" level=warning msg="cleaning up after shim disconnected" id=8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174 namespace=k8s.io Sep 9 00:26:58.963543 env[1214]: time="2025-09-09T00:26:58.963529663Z" level=info msg="cleaning up dead shim" Sep 9 00:26:58.970987 env[1214]: time="2025-09-09T00:26:58.970940138Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2950 runtime=io.containerd.runc.v2\n" Sep 9 00:26:58.971463 env[1214]: time="2025-09-09T00:26:58.971433175Z" level=info msg="TearDown network for sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" successfully" Sep 9 00:26:58.971560 env[1214]: time="2025-09-09T00:26:58.971541775Z" level=info msg="StopPodSandbox for \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" returns successfully" Sep 9 00:26:59.170859 kubelet[1418]: I0909 00:26:59.169812 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hubble-tls\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.170859 kubelet[1418]: I0909 00:26:59.170126 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.170859 kubelet[1418]: I0909 00:26:59.169869 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-lib-modules\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.170859 kubelet[1418]: I0909 00:26:59.170253 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-xtables-lock\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.170859 kubelet[1418]: I0909 00:26:59.170271 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-kernel\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.170859 kubelet[1418]: I0909 00:26:59.170292 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862hm\" (UniqueName: \"kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-kube-api-access-862hm\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171116 kubelet[1418]: I0909 00:26:59.170307 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-run\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171116 kubelet[1418]: I0909 00:26:59.170320 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-bpf-maps\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171116 kubelet[1418]: I0909 00:26:59.170339 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-config-path\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171116 kubelet[1418]: I0909 00:26:59.170363 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hostproc\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171116 kubelet[1418]: I0909 00:26:59.170447 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.171116 kubelet[1418]: I0909 00:26:59.170467 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.171257 kubelet[1418]: I0909 00:26:59.170482 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.171257 kubelet[1418]: I0909 00:26:59.170495 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.171257 kubelet[1418]: I0909 00:26:59.170947 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-etc-cni-netd\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171257 kubelet[1418]: I0909 00:26:59.170977 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-clustermesh-secrets\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171257 kubelet[1418]: I0909 00:26:59.170993 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-net\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171416 kubelet[1418]: I0909 00:26:59.171011 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-cgroup\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171416 kubelet[1418]: I0909 00:26:59.171025 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cni-path\") pod \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\" (UID: \"dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d\") " Sep 9 00:26:59.171416 kubelet[1418]: I0909 00:26:59.171072 1418 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-lib-modules\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.171416 kubelet[1418]: I0909 00:26:59.171081 1418 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-xtables-lock\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.171416 kubelet[1418]: I0909 00:26:59.171089 1418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-kernel\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.171416 kubelet[1418]: I0909 00:26:59.171098 1418 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-run\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.171416 kubelet[1418]: I0909 00:26:59.171107 1418 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-bpf-maps\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.171571 kubelet[1418]: I0909 00:26:59.171133 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cni-path" (OuterVolumeSpecName: "cni-path") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.171571 kubelet[1418]: I0909 00:26:59.171150 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hostproc" (OuterVolumeSpecName: "hostproc") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.171571 kubelet[1418]: I0909 00:26:59.171163 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.172089 kubelet[1418]: I0909 00:26:59.171753 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.173401 kubelet[1418]: I0909 00:26:59.172141 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:26:59.175850 kubelet[1418]: I0909 00:26:59.175680 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:26:59.178462 kubelet[1418]: I0909 00:26:59.178431 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:26:59.179263 kubelet[1418]: I0909 00:26:59.179143 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:26:59.179263 kubelet[1418]: I0909 00:26:59.179246 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-kube-api-access-862hm" (OuterVolumeSpecName: "kube-api-access-862hm") pod "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" (UID: "dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d"). InnerVolumeSpecName "kube-api-access-862hm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:26:59.253986 kubelet[1418]: E0909 00:26:59.253940 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272273 1418 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-etc-cni-netd\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272312 1418 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-clustermesh-secrets\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272324 1418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-host-proc-sys-net\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272333 1418 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-cgroup\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272341 1418 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cni-path\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272348 1418 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hubble-tls\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272357 1418 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-862hm\" (UniqueName: \"kubernetes.io/projected/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-kube-api-access-862hm\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.272446 kubelet[1418]: I0909 00:26:59.272365 1418 reconciler_common.go:299] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-cilium-config-path\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.273019 kubelet[1418]: I0909 00:26:59.272375 1418 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d-hostproc\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:26:59.573259 kubelet[1418]: I0909 00:26:59.573224 1418 scope.go:117] "RemoveContainer" containerID="eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800" Sep 9 00:26:59.576315 env[1214]: time="2025-09-09T00:26:59.576277074Z" level=info msg="RemoveContainer for \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\"" Sep 9 00:26:59.579271 env[1214]: time="2025-09-09T00:26:59.579228777Z" level=info msg="RemoveContainer for \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\" returns successfully" Sep 9 00:26:59.579886 kubelet[1418]: I0909 00:26:59.579852 1418 scope.go:117] "RemoveContainer" containerID="fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba" Sep 9 00:26:59.580544 systemd[1]: Removed slice kubepods-burstable-poddccfe4cd_8b2d_49e9_be67_7c671d8c1d4d.slice. Sep 9 00:26:59.580704 systemd[1]: kubepods-burstable-poddccfe4cd_8b2d_49e9_be67_7c671d8c1d4d.slice: Consumed 6.329s CPU time. Sep 9 00:26:59.582657 env[1214]: time="2025-09-09T00:26:59.582567198Z" level=info msg="RemoveContainer for \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\"" Sep 9 00:26:59.587341 env[1214]: time="2025-09-09T00:26:59.587307091Z" level=info msg="RemoveContainer for \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\" returns successfully" Sep 9 00:26:59.587555 kubelet[1418]: I0909 00:26:59.587536 1418 scope.go:117] "RemoveContainer" containerID="5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a" Sep 9 00:26:59.589213 env[1214]: time="2025-09-09T00:26:59.589180120Z" level=info msg="RemoveContainer for \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\"" Sep 9 00:26:59.592729 env[1214]: time="2025-09-09T00:26:59.592677100Z" level=info msg="RemoveContainer for \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\" returns successfully" Sep 9 00:26:59.592990 kubelet[1418]: I0909 00:26:59.592942 1418 scope.go:117] "RemoveContainer" containerID="aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648" Sep 9 00:26:59.594098 env[1214]: time="2025-09-09T00:26:59.594064932Z" level=info msg="RemoveContainer for \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\"" Sep 9 00:26:59.603496 env[1214]: time="2025-09-09T00:26:59.603461397Z" level=info msg="RemoveContainer for \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\" returns successfully" Sep 9 00:26:59.603847 kubelet[1418]: I0909 00:26:59.603786 1418 scope.go:117] "RemoveContainer" containerID="220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e" Sep 9 00:26:59.604977 env[1214]: time="2025-09-09T00:26:59.604953229Z" level=info msg="RemoveContainer for \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\"" Sep 9 00:26:59.608467 env[1214]: time="2025-09-09T00:26:59.608433049Z" level=info msg="RemoveContainer for \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\" returns successfully" Sep 9 00:26:59.608820 kubelet[1418]: I0909 00:26:59.608796 1418 scope.go:117] "RemoveContainer" 
containerID="eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800" Sep 9 00:26:59.609101 env[1214]: time="2025-09-09T00:26:59.609024805Z" level=error msg="ContainerStatus for \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\": not found" Sep 9 00:26:59.609340 kubelet[1418]: E0909 00:26:59.609317 1418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\": not found" containerID="eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800" Sep 9 00:26:59.609476 kubelet[1418]: I0909 00:26:59.609432 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800"} err="failed to get container status \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\": rpc error: code = NotFound desc = an error occurred when try to find container \"eef86c8c95166699d5e6d0ed5a362b29bd4ef87c58962569fc316f02a64b3800\": not found" Sep 9 00:26:59.609549 kubelet[1418]: I0909 00:26:59.609537 1418 scope.go:117] "RemoveContainer" containerID="fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba" Sep 9 00:26:59.609858 env[1214]: time="2025-09-09T00:26:59.609795841Z" level=error msg="ContainerStatus for \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\": not found" Sep 9 00:26:59.610066 kubelet[1418]: E0909 00:26:59.610044 1418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\": not found" containerID="fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba" Sep 9 00:26:59.610118 kubelet[1418]: I0909 00:26:59.610070 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba"} err="failed to get container status \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdecc9479a46b092a06ffebb58c88d79efdee7fe3580722331fa7e25864784ba\": not found" Sep 9 00:26:59.610118 kubelet[1418]: I0909 00:26:59.610096 1418 scope.go:117] "RemoveContainer" containerID="5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a" Sep 9 00:26:59.610326 env[1214]: time="2025-09-09T00:26:59.610284838Z" level=error msg="ContainerStatus for \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\": not found" Sep 9 00:26:59.610552 kubelet[1418]: E0909 00:26:59.610531 1418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\": not found" 
containerID="5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a" Sep 9 00:26:59.610668 kubelet[1418]: I0909 00:26:59.610648 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a"} err="failed to get container status \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e0cd6c61a57d9a72ae9208f19274d75eb18a705dce3fbe4ba937ced81177a9a\": not found" Sep 9 00:26:59.610751 kubelet[1418]: I0909 00:26:59.610738 1418 scope.go:117] "RemoveContainer" containerID="aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648" Sep 9 00:26:59.611058 env[1214]: time="2025-09-09T00:26:59.611006834Z" level=error msg="ContainerStatus for \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\": not found" Sep 9 00:26:59.611326 kubelet[1418]: E0909 00:26:59.611231 1418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\": not found" containerID="aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648" Sep 9 00:26:59.611390 kubelet[1418]: I0909 00:26:59.611342 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648"} err="failed to get container status \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa2f687763a63f215efd96678a1e7d6717561c5832712665949c5969f858a648\": not found" Sep 9 00:26:59.611390 kubelet[1418]: I0909 00:26:59.611359 1418 scope.go:117] "RemoveContainer" containerID="220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e" Sep 9 00:26:59.611610 env[1214]: time="2025-09-09T00:26:59.611567910Z" level=error msg="ContainerStatus for \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\": not found" Sep 9 00:26:59.611861 kubelet[1418]: E0909 00:26:59.611807 1418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\": not found" containerID="220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e" Sep 9 00:26:59.611993 kubelet[1418]: I0909 00:26:59.611971 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e"} err="failed to get container status \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"220ffc430c65b667595e2fb66c2cb991dd4beb971dd9940995ef4e7c54152e6e\": not found" Sep 9 00:26:59.767316 systemd[1]: var-lib-kubelet-pods-dccfe4cd\x2d8b2d\x2d49e9\x2dbe67\x2d7c671d8c1d4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d862hm.mount: Deactivated 
successfully. Sep 9 00:26:59.767419 systemd[1]: var-lib-kubelet-pods-dccfe4cd\x2d8b2d\x2d49e9\x2dbe67\x2d7c671d8c1d4d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:26:59.767480 systemd[1]: var-lib-kubelet-pods-dccfe4cd\x2d8b2d\x2d49e9\x2dbe67\x2d7c671d8c1d4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:27:00.254477 kubelet[1418]: E0909 00:27:00.254372 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:00.415344 kubelet[1418]: I0909 00:27:00.415311 1418 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d" path="/var/lib/kubelet/pods/dccfe4cd-8b2d-49e9-be67-7c671d8c1d4d/volumes" Sep 9 00:27:01.254939 kubelet[1418]: E0909 00:27:01.254871 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:01.386519 kubelet[1418]: E0909 00:27:01.386467 1418 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:27:02.255404 kubelet[1418]: E0909 00:27:02.255361 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:02.374195 systemd[1]: Created slice kubepods-besteffort-pod745806cb_ef5b_4be6_8af9_5873b4bd93f8.slice. Sep 9 00:27:02.379656 systemd[1]: Created slice kubepods-burstable-podc281c29b_359e_4ce4_9704_d89c49a247ae.slice. Sep 9 00:27:02.494493 kubelet[1418]: I0909 00:27:02.494442 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfd7\" (UniqueName: \"kubernetes.io/projected/745806cb-ef5b-4be6-8af9-5873b4bd93f8-kube-api-access-fhfd7\") pod \"cilium-operator-6c4d7847fc-mdrd6\" (UID: \"745806cb-ef5b-4be6-8af9-5873b4bd93f8\") " pod="kube-system/cilium-operator-6c4d7847fc-mdrd6" Sep 9 00:27:02.494493 kubelet[1418]: I0909 00:27:02.494492 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-run\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494660 kubelet[1418]: I0909 00:27:02.494513 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-bpf-maps\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494660 kubelet[1418]: I0909 00:27:02.494532 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-cgroup\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494660 kubelet[1418]: I0909 00:27:02.494552 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cni-path\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 
00:27:02.494660 kubelet[1418]: I0909 00:27:02.494566 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-xtables-lock\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494660 kubelet[1418]: I0909 00:27:02.494609 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-etc-cni-netd\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494660 kubelet[1418]: I0909 00:27:02.494624 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-lib-modules\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494869 kubelet[1418]: I0909 00:27:02.494641 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-clustermesh-secrets\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494869 kubelet[1418]: I0909 00:27:02.494656 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-config-path\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494869 kubelet[1418]: I0909 00:27:02.494673 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-ipsec-secrets\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494869 kubelet[1418]: I0909 00:27:02.494701 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-net\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.494869 kubelet[1418]: I0909 00:27:02.494719 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-hostproc\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.495032 kubelet[1418]: I0909 00:27:02.494734 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-kernel\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.495032 kubelet[1418]: I0909 00:27:02.494748 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-hubble-tls\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.495032 kubelet[1418]: I0909 00:27:02.494765 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7x7\" (UniqueName: \"kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-kube-api-access-pj7x7\") pod \"cilium-czf57\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " pod="kube-system/cilium-czf57" Sep 9 00:27:02.495032 kubelet[1418]: I0909 00:27:02.494925 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/745806cb-ef5b-4be6-8af9-5873b4bd93f8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mdrd6\" (UID: \"745806cb-ef5b-4be6-8af9-5873b4bd93f8\") " pod="kube-system/cilium-operator-6c4d7847fc-mdrd6" Sep 9 00:27:02.538069 kubelet[1418]: E0909 00:27:02.537983 1418 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-pj7x7 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-czf57" podUID="c281c29b-359e-4ce4-9704-d89c49a247ae" Sep 9 00:27:02.677205 kubelet[1418]: E0909 00:27:02.677141 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:02.678046 env[1214]: time="2025-09-09T00:27:02.677630041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mdrd6,Uid:745806cb-ef5b-4be6-8af9-5873b4bd93f8,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:02.696081 env[1214]: time="2025-09-09T00:27:02.696010545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:27:02.696242 env[1214]: time="2025-09-09T00:27:02.696220704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:27:02.696331 env[1214]: time="2025-09-09T00:27:02.696309903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:27:02.697161 env[1214]: time="2025-09-09T00:27:02.696571102Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd52d6aa9e8fb35ce9e37ce586f5c0a982dab4dcb67c0aef73fed10d2a74ebed pid=2979 runtime=io.containerd.runc.v2 Sep 9 00:27:02.697240 kubelet[1418]: I0909 00:27:02.697156 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cni-path\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.697240 kubelet[1418]: I0909 00:27:02.697225 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-lib-modules\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.697310 kubelet[1418]: I0909 00:27:02.697242 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-etc-cni-netd\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.697310 kubelet[1418]: I0909 00:27:02.697298 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-xtables-lock\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.697362 kubelet[1418]: I0909 00:27:02.697314 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-net\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.697362 kubelet[1418]: I0909 00:27:02.697329 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-cgroup\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.697473 kubelet[1418]: I0909 00:27:02.697378 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-config-path\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.697855 kubelet[1418]: I0909 00:27:02.697824 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.698352 kubelet[1418]: I0909 00:27:02.697835 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cni-path" (OuterVolumeSpecName: "cni-path") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.698450 kubelet[1418]: I0909 00:27:02.697855 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.698523 kubelet[1418]: I0909 00:27:02.697873 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.698577 kubelet[1418]: I0909 00:27:02.697883 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.698634 kubelet[1418]: I0909 00:27:02.697895 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.699219 kubelet[1418]: I0909 00:27:02.699187 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:27:02.712504 systemd[1]: Started cri-containerd-fd52d6aa9e8fb35ce9e37ce586f5c0a982dab4dcb67c0aef73fed10d2a74ebed.scope. 
Sep 9 00:27:02.756869 env[1214]: time="2025-09-09T00:27:02.755333876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mdrd6,Uid:745806cb-ef5b-4be6-8af9-5873b4bd93f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd52d6aa9e8fb35ce9e37ce586f5c0a982dab4dcb67c0aef73fed10d2a74ebed\"" Sep 9 00:27:02.757000 kubelet[1418]: E0909 00:27:02.755980 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:02.757282 env[1214]: time="2025-09-09T00:27:02.757230946Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:27:02.799055 kubelet[1418]: I0909 00:27:02.798384 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-hostproc\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799055 kubelet[1418]: I0909 00:27:02.798433 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-hostproc" (OuterVolumeSpecName: "hostproc") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.799055 kubelet[1418]: I0909 00:27:02.798450 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-clustermesh-secrets\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799055 kubelet[1418]: I0909 00:27:02.798496 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-hubble-tls\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799055 kubelet[1418]: I0909 00:27:02.798511 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-run\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799055 kubelet[1418]: I0909 00:27:02.798546 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj7x7\" (UniqueName: \"kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-kube-api-access-pj7x7\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799274 kubelet[1418]: I0909 00:27:02.798562 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-kernel\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799274 kubelet[1418]: I0909 00:27:02.798581 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-ipsec-secrets\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799274 kubelet[1418]: I0909 00:27:02.798595 1418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-bpf-maps\") pod \"c281c29b-359e-4ce4-9704-d89c49a247ae\" (UID: \"c281c29b-359e-4ce4-9704-d89c49a247ae\") " Sep 9 00:27:02.799274 kubelet[1418]: I0909 00:27:02.798632 1418 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-lib-modules\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799274 kubelet[1418]: I0909 00:27:02.798641 1418 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-etc-cni-netd\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799274 kubelet[1418]: I0909 00:27:02.798649 1418 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-xtables-lock\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799274 kubelet[1418]: I0909 00:27:02.798656 1418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-net\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799428 kubelet[1418]: I0909 00:27:02.798665 1418 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-cgroup\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799428 kubelet[1418]: I0909 00:27:02.798672 1418 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-config-path\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799428 kubelet[1418]: I0909 00:27:02.798680 1418 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-hostproc\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799428 kubelet[1418]: I0909 00:27:02.798688 1418 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cni-path\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.799428 kubelet[1418]: I0909 00:27:02.798735 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.799428 kubelet[1418]: I0909 00:27:02.799180 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.799556 kubelet[1418]: I0909 00:27:02.799213 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:02.801444 kubelet[1418]: I0909 00:27:02.801415 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:27:02.803245 kubelet[1418]: I0909 00:27:02.803192 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-kube-api-access-pj7x7" (OuterVolumeSpecName: "kube-api-access-pj7x7") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "kube-api-access-pj7x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:27:02.803671 kubelet[1418]: I0909 00:27:02.803515 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:27:02.803799 kubelet[1418]: I0909 00:27:02.803758 1418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c281c29b-359e-4ce4-9704-d89c49a247ae" (UID: "c281c29b-359e-4ce4-9704-d89c49a247ae"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:27:02.898986 kubelet[1418]: I0909 00:27:02.898933 1418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-host-proc-sys-kernel\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.898986 kubelet[1418]: I0909 00:27:02.898971 1418 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-ipsec-secrets\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.898986 kubelet[1418]: I0909 00:27:02.898980 1418 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-bpf-maps\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.898986 kubelet[1418]: I0909 00:27:02.898991 1418 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c281c29b-359e-4ce4-9704-d89c49a247ae-clustermesh-secrets\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.898986 kubelet[1418]: I0909 00:27:02.898999 1418 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-hubble-tls\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.899237 kubelet[1418]: I0909 00:27:02.899007 1418 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c281c29b-359e-4ce4-9704-d89c49a247ae-cilium-run\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:02.899237 kubelet[1418]: I0909 00:27:02.899015 1418 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pj7x7\" (UniqueName: \"kubernetes.io/projected/c281c29b-359e-4ce4-9704-d89c49a247ae-kube-api-access-pj7x7\") on node \"10.0.0.40\" DevicePath \"\"" Sep 9 00:27:03.256374 kubelet[1418]: E0909 00:27:03.256266 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:03.587336 systemd[1]: Removed slice kubepods-burstable-podc281c29b_359e_4ce4_9704_d89c49a247ae.slice. Sep 9 00:27:03.601609 systemd[1]: var-lib-kubelet-pods-c281c29b\x2d359e\x2d4ce4\x2d9704\x2dd89c49a247ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpj7x7.mount: Deactivated successfully. Sep 9 00:27:03.601751 systemd[1]: var-lib-kubelet-pods-c281c29b\x2d359e\x2d4ce4\x2d9704\x2dd89c49a247ae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:27:03.601830 systemd[1]: var-lib-kubelet-pods-c281c29b\x2d359e\x2d4ce4\x2d9704\x2dd89c49a247ae-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 9 00:27:03.601881 systemd[1]: var-lib-kubelet-pods-c281c29b\x2d359e\x2d4ce4\x2d9704\x2dd89c49a247ae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:27:03.639078 systemd[1]: Created slice kubepods-burstable-podbbac65e7_1495_4bbe_b5b8_890aa732b70c.slice. Sep 9 00:27:03.716513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143392183.mount: Deactivated successfully. 
Sep 9 00:27:03.808334 kubelet[1418]: I0909 00:27:03.808286 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbac65e7-1495-4bbe-b5b8-890aa732b70c-clustermesh-secrets\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808334 kubelet[1418]: I0909 00:27:03.808331 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbac65e7-1495-4bbe-b5b8-890aa732b70c-hubble-tls\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808334 kubelet[1418]: I0909 00:27:03.808348 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-bpf-maps\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808334 kubelet[1418]: I0909 00:27:03.808401 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-hostproc\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808646 kubelet[1418]: I0909 00:27:03.808446 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-host-proc-sys-net\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808646 kubelet[1418]: I0909 00:27:03.808463 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-host-proc-sys-kernel\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808646 kubelet[1418]: I0909 00:27:03.808479 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slltg\" (UniqueName: \"kubernetes.io/projected/bbac65e7-1495-4bbe-b5b8-890aa732b70c-kube-api-access-slltg\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808646 kubelet[1418]: I0909 00:27:03.808494 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-cni-path\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.808646 kubelet[1418]: I0909 00:27:03.808508 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-etc-cni-netd\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.809112 kubelet[1418]: I0909 00:27:03.808524 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/bbac65e7-1495-4bbe-b5b8-890aa732b70c-cilium-config-path\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.809112 kubelet[1418]: I0909 00:27:03.808538 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbac65e7-1495-4bbe-b5b8-890aa732b70c-cilium-ipsec-secrets\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.809112 kubelet[1418]: I0909 00:27:03.808554 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-cilium-run\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.809112 kubelet[1418]: I0909 00:27:03.808584 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-cilium-cgroup\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.809112 kubelet[1418]: I0909 00:27:03.808608 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-lib-modules\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.809112 kubelet[1418]: I0909 00:27:03.808648 1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbac65e7-1495-4bbe-b5b8-890aa732b70c-xtables-lock\") pod \"cilium-7hklt\" (UID: \"bbac65e7-1495-4bbe-b5b8-890aa732b70c\") " pod="kube-system/cilium-7hklt" Sep 9 00:27:03.950640 kubelet[1418]: E0909 00:27:03.949770 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:03.950774 env[1214]: time="2025-09-09T00:27:03.950275999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hklt,Uid:bbac65e7-1495-4bbe-b5b8-890aa732b70c,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:03.964805 env[1214]: time="2025-09-09T00:27:03.964715606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:27:03.964920 env[1214]: time="2025-09-09T00:27:03.964812326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:27:03.964920 env[1214]: time="2025-09-09T00:27:03.964839326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:27:03.965145 env[1214]: time="2025-09-09T00:27:03.965111764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972 pid=3028 runtime=io.containerd.runc.v2 Sep 9 00:27:03.978634 systemd[1]: Started cri-containerd-48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972.scope. 
Sep 9 00:27:04.007263 env[1214]: time="2025-09-09T00:27:04.007214673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hklt,Uid:bbac65e7-1495-4bbe-b5b8-890aa732b70c,Namespace:kube-system,Attempt:0,} returns sandbox id \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\"" Sep 9 00:27:04.008349 kubelet[1418]: E0909 00:27:04.007861 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:04.013420 env[1214]: time="2025-09-09T00:27:04.013384442Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:27:04.024549 env[1214]: time="2025-09-09T00:27:04.024509348Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cdb3cb4591bc9ed91a859091c41a7bb705eb33888aeb0314d4fadaa9862649b3\"" Sep 9 00:27:04.025254 env[1214]: time="2025-09-09T00:27:04.025224864Z" level=info msg="StartContainer for \"cdb3cb4591bc9ed91a859091c41a7bb705eb33888aeb0314d4fadaa9862649b3\"" Sep 9 00:27:04.040399 systemd[1]: Started cri-containerd-cdb3cb4591bc9ed91a859091c41a7bb705eb33888aeb0314d4fadaa9862649b3.scope. Sep 9 00:27:04.077240 env[1214]: time="2025-09-09T00:27:04.077193810Z" level=info msg="StartContainer for \"cdb3cb4591bc9ed91a859091c41a7bb705eb33888aeb0314d4fadaa9862649b3\" returns successfully" Sep 9 00:27:04.080081 systemd[1]: cri-containerd-cdb3cb4591bc9ed91a859091c41a7bb705eb33888aeb0314d4fadaa9862649b3.scope: Deactivated successfully. Sep 9 00:27:04.142306 env[1214]: time="2025-09-09T00:27:04.142255451Z" level=info msg="shim disconnected" id=cdb3cb4591bc9ed91a859091c41a7bb705eb33888aeb0314d4fadaa9862649b3 Sep 9 00:27:04.142306 env[1214]: time="2025-09-09T00:27:04.142303451Z" level=warning msg="cleaning up after shim disconnected" id=cdb3cb4591bc9ed91a859091c41a7bb705eb33888aeb0314d4fadaa9862649b3 namespace=k8s.io Sep 9 00:27:04.142306 env[1214]: time="2025-09-09T00:27:04.142313291Z" level=info msg="cleaning up dead shim" Sep 9 00:27:04.151958 env[1214]: time="2025-09-09T00:27:04.151846524Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3112 runtime=io.containerd.runc.v2\n" Sep 9 00:27:04.258079 kubelet[1418]: E0909 00:27:04.257303 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:04.326804 env[1214]: time="2025-09-09T00:27:04.326742428Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:27:04.328470 env[1214]: time="2025-09-09T00:27:04.328440620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:27:04.330405 env[1214]: time="2025-09-09T00:27:04.330366410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 9 00:27:04.330935 env[1214]: time="2025-09-09T00:27:04.330908287Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 00:27:04.337363 env[1214]: time="2025-09-09T00:27:04.337326856Z" level=info msg="CreateContainer within sandbox \"fd52d6aa9e8fb35ce9e37ce586f5c0a982dab4dcb67c0aef73fed10d2a74ebed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:27:04.347429 env[1214]: time="2025-09-09T00:27:04.347384127Z" level=info msg="CreateContainer within sandbox \"fd52d6aa9e8fb35ce9e37ce586f5c0a982dab4dcb67c0aef73fed10d2a74ebed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"801d83cc62d0c9de3335aca8f74247a4b0c4ebbebd75c0dc19dc83a74b8eff9a\"" Sep 9 00:27:04.348347 env[1214]: time="2025-09-09T00:27:04.347909444Z" level=info msg="StartContainer for \"801d83cc62d0c9de3335aca8f74247a4b0c4ebbebd75c0dc19dc83a74b8eff9a\"" Sep 9 00:27:04.366020 systemd[1]: Started cri-containerd-801d83cc62d0c9de3335aca8f74247a4b0c4ebbebd75c0dc19dc83a74b8eff9a.scope. Sep 9 00:27:04.412070 env[1214]: time="2025-09-09T00:27:04.412015810Z" level=info msg="StartContainer for \"801d83cc62d0c9de3335aca8f74247a4b0c4ebbebd75c0dc19dc83a74b8eff9a\" returns successfully" Sep 9 00:27:04.415283 kubelet[1418]: I0909 00:27:04.415237 1418 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c281c29b-359e-4ce4-9704-d89c49a247ae" path="/var/lib/kubelet/pods/c281c29b-359e-4ce4-9704-d89c49a247ae/volumes" Sep 9 00:27:04.585737 kubelet[1418]: E0909 00:27:04.585703 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:04.587622 kubelet[1418]: E0909 00:27:04.587569 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:04.592067 env[1214]: time="2025-09-09T00:27:04.592011049Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:27:04.597023 kubelet[1418]: I0909 00:27:04.596964 1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mdrd6" podStartSLOduration=1.020810335 podStartE2EDuration="2.596948865s" podCreationTimestamp="2025-09-09 00:27:02 +0000 UTC" firstStartedPulling="2025-09-09 00:27:02.756952187 +0000 UTC m=+58.802507618" lastFinishedPulling="2025-09-09 00:27:04.333090677 +0000 UTC m=+60.378646148" observedRunningTime="2025-09-09 00:27:04.596601906 +0000 UTC m=+60.642157337" watchObservedRunningTime="2025-09-09 00:27:04.596948865 +0000 UTC m=+60.642504336" Sep 9 00:27:04.612098 env[1214]: time="2025-09-09T00:27:04.612032791Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b\"" Sep 9 00:27:04.615065 env[1214]: time="2025-09-09T00:27:04.614819097Z" level=info msg="StartContainer for \"48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b\"" 
Sep 9 00:27:04.632786 systemd[1]: Started cri-containerd-48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b.scope. Sep 9 00:27:04.662218 env[1214]: time="2025-09-09T00:27:04.662156745Z" level=info msg="StartContainer for \"48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b\" returns successfully" Sep 9 00:27:04.668138 systemd[1]: cri-containerd-48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b.scope: Deactivated successfully. Sep 9 00:27:04.715049 env[1214]: time="2025-09-09T00:27:04.714675048Z" level=info msg="shim disconnected" id=48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b Sep 9 00:27:04.715049 env[1214]: time="2025-09-09T00:27:04.714731448Z" level=warning msg="cleaning up after shim disconnected" id=48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b namespace=k8s.io Sep 9 00:27:04.715049 env[1214]: time="2025-09-09T00:27:04.714741448Z" level=info msg="cleaning up dead shim" Sep 9 00:27:04.721727 env[1214]: time="2025-09-09T00:27:04.721670574Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3213 runtime=io.containerd.runc.v2\n" Sep 9 00:27:05.257667 kubelet[1418]: E0909 00:27:05.257604 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:05.592556 kubelet[1418]: E0909 00:27:05.592329 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:05.593141 kubelet[1418]: E0909 00:27:05.593054 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:05.597204 env[1214]: time="2025-09-09T00:27:05.597165212Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:27:05.600047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48cca1f5feea596088cf98c36c81713c48dec7239f7f09d54b2f5ba2822cec7b-rootfs.mount: Deactivated successfully. Sep 9 00:27:05.616767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059423315.mount: Deactivated successfully. Sep 9 00:27:05.619039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782734341.mount: Deactivated successfully. Sep 9 00:27:05.619309 env[1214]: time="2025-09-09T00:27:05.619264387Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052\"" Sep 9 00:27:05.619972 env[1214]: time="2025-09-09T00:27:05.619943184Z" level=info msg="StartContainer for \"aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052\"" Sep 9 00:27:05.641630 systemd[1]: Started cri-containerd-aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052.scope. Sep 9 00:27:05.674667 env[1214]: time="2025-09-09T00:27:05.674141726Z" level=info msg="StartContainer for \"aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052\" returns successfully" Sep 9 00:27:05.674813 systemd[1]: cri-containerd-aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052.scope: Deactivated successfully. 
Sep 9 00:27:05.696569 env[1214]: time="2025-09-09T00:27:05.696525380Z" level=info msg="shim disconnected" id=aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052 Sep 9 00:27:05.696916 env[1214]: time="2025-09-09T00:27:05.696893418Z" level=warning msg="cleaning up after shim disconnected" id=aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052 namespace=k8s.io Sep 9 00:27:05.697014 env[1214]: time="2025-09-09T00:27:05.696999218Z" level=info msg="cleaning up dead shim" Sep 9 00:27:05.703112 env[1214]: time="2025-09-09T00:27:05.703080229Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3270 runtime=io.containerd.runc.v2\n" Sep 9 00:27:06.215502 kubelet[1418]: E0909 00:27:06.215448 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:06.232500 env[1214]: time="2025-09-09T00:27:06.232464303Z" level=info msg="StopPodSandbox for \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\"" Sep 9 00:27:06.232764 env[1214]: time="2025-09-09T00:27:06.232714422Z" level=info msg="TearDown network for sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" successfully" Sep 9 00:27:06.232860 env[1214]: time="2025-09-09T00:27:06.232840542Z" level=info msg="StopPodSandbox for \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" returns successfully" Sep 9 00:27:06.233498 env[1214]: time="2025-09-09T00:27:06.233259980Z" level=info msg="RemovePodSandbox for \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\"" Sep 9 00:27:06.233498 env[1214]: time="2025-09-09T00:27:06.233290900Z" level=info msg="Forcibly stopping sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\"" Sep 9 00:27:06.233498 env[1214]: time="2025-09-09T00:27:06.233352099Z" level=info msg="TearDown network for sandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" successfully" Sep 9 00:27:06.237144 env[1214]: time="2025-09-09T00:27:06.237100042Z" level=info msg="RemovePodSandbox \"8ea54c05fdc56e6fae4177962b4f14c309bf5f081f9dc818c8e02ccb201a3174\" returns successfully" Sep 9 00:27:06.258567 kubelet[1418]: E0909 00:27:06.258455 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:06.387874 kubelet[1418]: E0909 00:27:06.387841 1418 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:27:06.595782 kubelet[1418]: E0909 00:27:06.595753 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:06.600164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa10a071fb21bb7b51f274963db5c9da8b821e44916247a73c8088cb0184052-rootfs.mount: Deactivated successfully. 
Sep 9 00:27:06.602278 env[1214]: time="2025-09-09T00:27:06.602238836Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:27:06.619976 env[1214]: time="2025-09-09T00:27:06.619920554Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172\"" Sep 9 00:27:06.620738 env[1214]: time="2025-09-09T00:27:06.620707751Z" level=info msg="StartContainer for \"7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172\"" Sep 9 00:27:06.639733 systemd[1]: Started cri-containerd-7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172.scope. Sep 9 00:27:06.677956 systemd[1]: cri-containerd-7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172.scope: Deactivated successfully. Sep 9 00:27:06.685073 env[1214]: time="2025-09-09T00:27:06.685005974Z" level=info msg="StartContainer for \"7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172\" returns successfully" Sep 9 00:27:06.705811 env[1214]: time="2025-09-09T00:27:06.705667358Z" level=info msg="shim disconnected" id=7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172 Sep 9 00:27:06.705988 env[1214]: time="2025-09-09T00:27:06.705816998Z" level=warning msg="cleaning up after shim disconnected" id=7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172 namespace=k8s.io Sep 9 00:27:06.705988 env[1214]: time="2025-09-09T00:27:06.705831037Z" level=info msg="cleaning up dead shim" Sep 9 00:27:06.712768 env[1214]: time="2025-09-09T00:27:06.712720246Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3328 runtime=io.containerd.runc.v2\n" Sep 9 00:27:07.258833 kubelet[1418]: E0909 00:27:07.258781 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:07.601809 systemd[1]: run-containerd-runc-k8s.io-7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172-runc.FDmEJb.mount: Deactivated successfully. Sep 9 00:27:07.602135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ea871d3d6ad2bacdc87db9e58ebf2919bad3693c46a176862892eeb9ab86172-rootfs.mount: Deactivated successfully. Sep 9 00:27:07.603698 kubelet[1418]: E0909 00:27:07.603657 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:07.609569 env[1214]: time="2025-09-09T00:27:07.609515901Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:27:07.628964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069951690.mount: Deactivated successfully. 
Sep 9 00:27:07.638494 env[1214]: time="2025-09-09T00:27:07.638423091Z" level=info msg="CreateContainer within sandbox \"48839c081bd45fbfd1e9638f855abd48fc45e8698c741c4756de5f76d649b972\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ae893efa5a29e0119c0db23aa79d807810dfebbf222355f5c82107a3764a44ac\"" Sep 9 00:27:07.639195 env[1214]: time="2025-09-09T00:27:07.639152688Z" level=info msg="StartContainer for \"ae893efa5a29e0119c0db23aa79d807810dfebbf222355f5c82107a3764a44ac\"" Sep 9 00:27:07.658366 systemd[1]: Started cri-containerd-ae893efa5a29e0119c0db23aa79d807810dfebbf222355f5c82107a3764a44ac.scope. Sep 9 00:27:07.696920 env[1214]: time="2025-09-09T00:27:07.696873989Z" level=info msg="StartContainer for \"ae893efa5a29e0119c0db23aa79d807810dfebbf222355f5c82107a3764a44ac\" returns successfully" Sep 9 00:27:07.938723 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 9 00:27:08.069810 kubelet[1418]: I0909 00:27:08.069310 1418 setters.go:618] "Node became not ready" node="10.0.0.40" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:27:08Z","lastTransitionTime":"2025-09-09T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:27:08.259606 kubelet[1418]: E0909 00:27:08.259538 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:08.608262 kubelet[1418]: E0909 00:27:08.608230 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:08.823008 systemd[1]: run-containerd-runc-k8s.io-ae893efa5a29e0119c0db23aa79d807810dfebbf222355f5c82107a3764a44ac-runc.P5jppX.mount: Deactivated successfully. 
Sep 9 00:27:09.259931 kubelet[1418]: E0909 00:27:09.259881 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:09.951169 kubelet[1418]: E0909 00:27:09.951120 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:10.260608 kubelet[1418]: E0909 00:27:10.260496 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:10.792676 systemd-networkd[1039]: lxc_health: Link UP Sep 9 00:27:10.800029 systemd-networkd[1039]: lxc_health: Gained carrier Sep 9 00:27:10.800726 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 9 00:27:11.261415 kubelet[1418]: E0909 00:27:11.261300 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:11.951377 kubelet[1418]: E0909 00:27:11.951342 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:11.969092 kubelet[1418]: I0909 00:27:11.969032 1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7hklt" podStartSLOduration=8.969016586 podStartE2EDuration="8.969016586s" podCreationTimestamp="2025-09-09 00:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:27:08.628508319 +0000 UTC m=+64.674063790" watchObservedRunningTime="2025-09-09 00:27:11.969016586 +0000 UTC m=+68.014572017" Sep 9 00:27:12.157835 systemd-networkd[1039]: lxc_health: Gained IPv6LL Sep 9 00:27:12.262059 kubelet[1418]: E0909 00:27:12.261947 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:12.615579 kubelet[1418]: E0909 00:27:12.615547 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:13.055810 systemd[1]: run-containerd-runc-k8s.io-ae893efa5a29e0119c0db23aa79d807810dfebbf222355f5c82107a3764a44ac-runc.QmppiC.mount: Deactivated successfully. 
Sep 9 00:27:13.262631 kubelet[1418]: E0909 00:27:13.262594 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:13.617404 kubelet[1418]: E0909 00:27:13.617372 1418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:14.263971 kubelet[1418]: E0909 00:27:14.263922 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:15.264645 kubelet[1418]: E0909 00:27:15.264588 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:16.265159 kubelet[1418]: E0909 00:27:16.265076 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:17.265593 kubelet[1418]: E0909 00:27:17.265534 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:18.266682 kubelet[1418]: E0909 00:27:18.266614 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:27:19.267096 kubelet[1418]: E0909 00:27:19.267052 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"