Sep 6 00:01:33.712274 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 6 00:01:33.712294 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025 Sep 6 00:01:33.712302 kernel: efi: EFI v2.70 by EDK II Sep 6 00:01:33.712308 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Sep 6 00:01:33.712313 kernel: random: crng init done Sep 6 00:01:33.712319 kernel: ACPI: Early table checksum verification disabled Sep 6 00:01:33.712325 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Sep 6 00:01:33.712332 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 6 00:01:33.712338 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712343 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712350 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712355 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712360 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712366 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712375 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712380 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712386 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:01:33.712411 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 6 00:01:33.712424 kernel: NUMA: Failed to initialise from firmware Sep 6 00:01:33.712431 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 6 00:01:33.712436 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Sep 6 00:01:33.712442 kernel: Zone ranges: Sep 6 00:01:33.712448 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 6 00:01:33.712455 kernel: DMA32 empty Sep 6 00:01:33.712461 kernel: Normal empty Sep 6 00:01:33.712466 kernel: Movable zone start for each node Sep 6 00:01:33.712472 kernel: Early memory node ranges Sep 6 00:01:33.712478 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Sep 6 00:01:33.712484 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Sep 6 00:01:33.712489 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Sep 6 00:01:33.712496 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Sep 6 00:01:33.712501 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Sep 6 00:01:33.712507 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Sep 6 00:01:33.712513 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Sep 6 00:01:33.712519 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 6 00:01:33.712526 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 6 00:01:33.712532 kernel: psci: probing for conduit method from ACPI. Sep 6 00:01:33.712537 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 6 00:01:33.712543 kernel: psci: Using standard PSCI v0.2 function IDs Sep 6 00:01:33.712549 kernel: psci: Trusted OS migration not required Sep 6 00:01:33.712557 kernel: psci: SMC Calling Convention v1.1 Sep 6 00:01:33.712563 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 6 00:01:33.712571 kernel: ACPI: SRAT not present Sep 6 00:01:33.712580 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Sep 6 00:01:33.712595 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Sep 6 00:01:33.712602 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 6 00:01:33.712608 kernel: Detected PIPT I-cache on CPU0 Sep 6 00:01:33.712614 kernel: CPU features: detected: GIC system register CPU interface Sep 6 00:01:33.712620 kernel: CPU features: detected: Hardware dirty bit management Sep 6 00:01:33.712626 kernel: CPU features: detected: Spectre-v4 Sep 6 00:01:33.712632 kernel: CPU features: detected: Spectre-BHB Sep 6 00:01:33.712640 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 6 00:01:33.712647 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 6 00:01:33.712653 kernel: CPU features: detected: ARM erratum 1418040 Sep 6 00:01:33.712659 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 6 00:01:33.712665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 6 00:01:33.712671 kernel: Policy zone: DMA Sep 6 00:01:33.712679 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 00:01:33.712685 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:01:33.712691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:01:33.712697 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:01:33.712704 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:01:33.712712 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Sep 6 00:01:33.712718 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 6 00:01:33.712724 kernel: trace event string verifier disabled Sep 6 00:01:33.712730 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 6 00:01:33.712737 kernel: rcu: RCU event tracing is enabled. Sep 6 00:01:33.712746 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 6 00:01:33.712752 kernel: Trampoline variant of Tasks RCU enabled. Sep 6 00:01:33.712758 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:01:33.712764 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 00:01:33.712770 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 6 00:01:33.712777 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 6 00:01:33.712784 kernel: GICv3: 256 SPIs implemented Sep 6 00:01:33.712790 kernel: GICv3: 0 Extended SPIs implemented Sep 6 00:01:33.712797 kernel: GICv3: Distributor has no Range Selector support Sep 6 00:01:33.712803 kernel: Root IRQ handler: gic_handle_irq Sep 6 00:01:33.712809 kernel: GICv3: 16 PPIs implemented Sep 6 00:01:33.712815 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 6 00:01:33.712821 kernel: ACPI: SRAT not present Sep 6 00:01:33.712827 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 6 00:01:33.712833 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Sep 6 00:01:33.712849 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Sep 6 00:01:33.712855 kernel: GICv3: using LPI property table @0x00000000400d0000 Sep 6 00:01:33.712861 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Sep 6 00:01:33.712869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:01:33.712875 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 6 00:01:33.712882 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 6 00:01:33.712888 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 6 00:01:33.712894 kernel: arm-pv: using stolen time PV Sep 6 00:01:33.712901 kernel: Console: colour dummy device 80x25 Sep 6 00:01:33.712907 kernel: ACPI: Core revision 20210730 Sep 6 00:01:33.712913 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 6 00:01:33.712920 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:01:33.712926 kernel: LSM: Security Framework initializing Sep 6 00:01:33.712934 kernel: SELinux: Initializing. Sep 6 00:01:33.712940 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:01:33.712946 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:01:33.712953 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:01:33.712959 kernel: Platform MSI: ITS@0x8080000 domain created Sep 6 00:01:33.712965 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 6 00:01:33.712972 kernel: Remapping and enabling EFI services. Sep 6 00:01:33.712978 kernel: smp: Bringing up secondary CPUs ... 
Sep 6 00:01:33.712984 kernel: Detected PIPT I-cache on CPU1 Sep 6 00:01:33.712992 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 6 00:01:33.712998 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Sep 6 00:01:33.713005 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:01:33.713011 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 6 00:01:33.713017 kernel: Detected PIPT I-cache on CPU2 Sep 6 00:01:33.713024 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 6 00:01:33.713031 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Sep 6 00:01:33.713037 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:01:33.713043 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 6 00:01:33.713050 kernel: Detected PIPT I-cache on CPU3 Sep 6 00:01:33.713057 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 6 00:01:33.713064 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Sep 6 00:01:33.713070 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:01:33.713076 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 6 00:01:33.713087 kernel: smp: Brought up 1 node, 4 CPUs Sep 6 00:01:33.713095 kernel: SMP: Total of 4 processors activated. Sep 6 00:01:33.713102 kernel: CPU features: detected: 32-bit EL0 Support Sep 6 00:01:33.713108 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 6 00:01:33.713115 kernel: CPU features: detected: Common not Private translations Sep 6 00:01:33.713122 kernel: CPU features: detected: CRC32 instructions Sep 6 00:01:33.713128 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 6 00:01:33.713135 kernel: CPU features: detected: LSE atomic instructions Sep 6 00:01:33.713143 kernel: CPU features: detected: Privileged Access Never Sep 6 00:01:33.713150 kernel: CPU features: detected: RAS Extension Support Sep 6 00:01:33.713156 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 6 00:01:33.713163 kernel: CPU: All CPU(s) started at EL1 Sep 6 00:01:33.713170 kernel: alternatives: patching kernel code Sep 6 00:01:33.713178 kernel: devtmpfs: initialized Sep 6 00:01:33.713184 kernel: KASLR enabled Sep 6 00:01:33.713191 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:01:33.713198 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 6 00:01:33.713205 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:01:33.713212 kernel: SMBIOS 3.0.0 present. 
Sep 6 00:01:33.713218 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Sep 6 00:01:33.713225 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:01:33.713232 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 6 00:01:33.713240 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 6 00:01:33.713247 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 6 00:01:33.713253 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:01:33.713260 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1 Sep 6 00:01:33.713267 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:01:33.713275 kernel: cpuidle: using governor menu Sep 6 00:01:33.713282 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 6 00:01:33.713289 kernel: ASID allocator initialised with 32768 entries Sep 6 00:01:33.713295 kernel: ACPI: bus type PCI registered Sep 6 00:01:33.713303 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:01:33.713310 kernel: Serial: AMBA PL011 UART driver Sep 6 00:01:33.713316 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:01:33.713323 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 6 00:01:33.713330 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:01:33.713336 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 6 00:01:33.713343 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:01:33.713350 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 6 00:01:33.713357 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:01:33.713364 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:01:33.713371 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:01:33.713377 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:01:33.713384 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:01:33.713391 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:01:33.713398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:01:33.713404 kernel: ACPI: Interpreter enabled Sep 6 00:01:33.713411 kernel: ACPI: Using GIC for interrupt routing Sep 6 00:01:33.713417 kernel: ACPI: MCFG table detected, 1 entries Sep 6 00:01:33.713426 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 6 00:01:33.713432 kernel: printk: console [ttyAMA0] enabled Sep 6 00:01:33.713439 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:01:33.713578 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:01:33.713659 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 6 00:01:33.713722 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 6 00:01:33.713782 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 6 00:01:33.713861 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 6 00:01:33.713872 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 6 00:01:33.713879 kernel: PCI host bridge to bus 0000:00 Sep 6 00:01:33.714004 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 6 00:01:33.714062 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 6 00:01:33.714116 kernel: pci_bus 0000:00: root bus 
resource [mem 0x8000000000-0xffffffffff window] Sep 6 00:01:33.714169 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:01:33.714249 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 6 00:01:33.714321 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 6 00:01:33.714388 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 6 00:01:33.714450 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 6 00:01:33.714511 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 6 00:01:33.714572 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 6 00:01:33.714647 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 6 00:01:33.714713 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 6 00:01:33.714769 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 6 00:01:33.714825 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 6 00:01:33.714985 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 6 00:01:33.714999 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 6 00:01:33.715006 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 6 00:01:33.715013 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 6 00:01:33.715020 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 6 00:01:33.715031 kernel: iommu: Default domain type: Translated Sep 6 00:01:33.715038 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 6 00:01:33.715045 kernel: vgaarb: loaded Sep 6 00:01:33.715052 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:01:33.715059 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:01:33.715066 kernel: PTP clock support registered Sep 6 00:01:33.715073 kernel: Registered efivars operations Sep 6 00:01:33.715079 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 6 00:01:33.715086 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:01:33.715095 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:01:33.715101 kernel: pnp: PnP ACPI init Sep 6 00:01:33.715184 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 6 00:01:33.715195 kernel: pnp: PnP ACPI: found 1 devices Sep 6 00:01:33.715202 kernel: NET: Registered PF_INET protocol family Sep 6 00:01:33.715209 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 00:01:33.715216 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 00:01:33.715223 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:01:33.715232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:01:33.715239 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 00:01:33.715246 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 00:01:33.715253 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:01:33.715260 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:01:33.715267 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:01:33.715274 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:01:33.715281 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 6 00:01:33.715288 kernel: kvm [1]: HYP mode not available Sep 
6 00:01:33.715297 kernel: Initialise system trusted keyrings Sep 6 00:01:33.715303 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 00:01:33.715310 kernel: Key type asymmetric registered Sep 6 00:01:33.715317 kernel: Asymmetric key parser 'x509' registered Sep 6 00:01:33.715324 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:01:33.715331 kernel: io scheduler mq-deadline registered Sep 6 00:01:33.715338 kernel: io scheduler kyber registered Sep 6 00:01:33.715345 kernel: io scheduler bfq registered Sep 6 00:01:33.715352 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 6 00:01:33.715360 kernel: ACPI: button: Power Button [PWRB] Sep 6 00:01:33.715368 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 6 00:01:33.715434 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 6 00:01:33.715444 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:01:33.715451 kernel: thunder_xcv, ver 1.0 Sep 6 00:01:33.715458 kernel: thunder_bgx, ver 1.0 Sep 6 00:01:33.715464 kernel: nicpf, ver 1.0 Sep 6 00:01:33.715471 kernel: nicvf, ver 1.0 Sep 6 00:01:33.715541 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 6 00:01:33.715617 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:01:33 UTC (1757116893) Sep 6 00:01:33.715627 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 00:01:33.715634 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:01:33.715641 kernel: Segment Routing with IPv6 Sep 6 00:01:33.715648 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:01:33.715654 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:01:33.715662 kernel: Key type dns_resolver registered Sep 6 00:01:33.715669 kernel: registered taskstats version 1 Sep 6 00:01:33.715677 kernel: Loading compiled-in X.509 certificates Sep 6 00:01:33.715684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386' Sep 6 00:01:33.715691 kernel: Key type .fscrypt registered Sep 6 00:01:33.715698 kernel: Key type fscrypt-provisioning registered Sep 6 00:01:33.715704 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:01:33.715711 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:01:33.715718 kernel: ima: No architecture policies found Sep 6 00:01:33.715725 kernel: clk: Disabling unused clocks Sep 6 00:01:33.715731 kernel: Freeing unused kernel memory: 36416K Sep 6 00:01:33.715739 kernel: Run /init as init process Sep 6 00:01:33.715746 kernel: with arguments: Sep 6 00:01:33.715753 kernel: /init Sep 6 00:01:33.715760 kernel: with environment: Sep 6 00:01:33.715766 kernel: HOME=/ Sep 6 00:01:33.715773 kernel: TERM=linux Sep 6 00:01:33.715779 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:01:33.715788 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:01:33.715798 systemd[1]: Detected virtualization kvm. Sep 6 00:01:33.715806 systemd[1]: Detected architecture arm64. Sep 6 00:01:33.715813 systemd[1]: Running in initrd. Sep 6 00:01:33.715820 systemd[1]: No hostname configured, using default hostname. Sep 6 00:01:33.715827 systemd[1]: Hostname set to . 
Sep 6 00:01:33.715845 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:01:33.715854 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:01:33.715860 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:01:33.715869 systemd[1]: Reached target cryptsetup.target. Sep 6 00:01:33.715876 systemd[1]: Reached target paths.target. Sep 6 00:01:33.715883 systemd[1]: Reached target slices.target. Sep 6 00:01:33.715890 systemd[1]: Reached target swap.target. Sep 6 00:01:33.715897 systemd[1]: Reached target timers.target. Sep 6 00:01:33.715904 systemd[1]: Listening on iscsid.socket. Sep 6 00:01:33.715912 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:01:33.715920 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:01:33.715928 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:01:33.715935 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:01:33.715942 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:01:33.715949 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:01:33.715956 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:01:33.715963 systemd[1]: Reached target sockets.target. Sep 6 00:01:33.715971 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:01:33.715978 systemd[1]: Finished network-cleanup.service. Sep 6 00:01:33.715986 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:01:33.715993 systemd[1]: Starting systemd-journald.service... Sep 6 00:01:33.716000 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:01:33.716008 systemd[1]: Starting systemd-resolved.service... Sep 6 00:01:33.716015 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:01:33.716022 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:01:33.716029 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:01:33.716042 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:01:33.716050 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:01:33.716059 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:01:33.716066 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:01:33.716074 kernel: audit: type=1130 audit(1757116893.713:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.716085 systemd-journald[290]: Journal started Sep 6 00:01:33.716130 systemd-journald[290]: Runtime Journal (/run/log/journal/2bc359670e1d44ef92a9eff7771e6383) is 6.0M, max 48.7M, 42.6M free. Sep 6 00:01:33.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.705111 systemd-modules-load[291]: Inserted module 'overlay' Sep 6 00:01:33.717562 systemd[1]: Started systemd-journald.service. Sep 6 00:01:33.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.720909 kernel: audit: type=1130 audit(1757116893.718:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.729633 systemd-resolved[292]: Positive Trust Anchors: Sep 6 00:01:33.729648 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:01:33.729677 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:01:33.731942 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:01:33.745899 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:01:33.745921 kernel: audit: type=1130 audit(1757116893.738:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.745932 kernel: Bridge firewalling registered Sep 6 00:01:33.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.738999 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:01:33.745401 systemd-modules-load[291]: Inserted module 'br_netfilter' Sep 6 00:01:33.745428 systemd-resolved[292]: Defaulting to hostname 'linux'. Sep 6 00:01:33.754152 kernel: audit: type=1130 audit(1757116893.748:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.748465 systemd[1]: Started systemd-resolved.service. Sep 6 00:01:33.749882 systemd[1]: Reached target nss-lookup.target. Sep 6 00:01:33.758237 kernel: SCSI subsystem initialized Sep 6 00:01:33.759812 dracut-cmdline[308]: dracut-dracut-053 Sep 6 00:01:33.762093 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 00:01:33.767891 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:01:33.768282 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:01:33.768301 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:01:33.771781 systemd-modules-load[291]: Inserted module 'dm_multipath' Sep 6 00:01:33.772706 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:01:33.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.774350 systemd[1]: Starting systemd-sysctl.service... 
Sep 6 00:01:33.777279 kernel: audit: type=1130 audit(1757116893.773:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.782602 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:01:33.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.786863 kernel: audit: type=1130 audit(1757116893.783:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.830867 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:01:33.843867 kernel: iscsi: registered transport (tcp) Sep 6 00:01:33.862867 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:01:33.862926 kernel: QLogic iSCSI HBA Driver Sep 6 00:01:33.914624 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:01:33.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.916383 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:01:33.919635 kernel: audit: type=1130 audit(1757116893.914:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:33.961872 kernel: raid6: neonx8 gen() 13406 MB/s Sep 6 00:01:33.978872 kernel: raid6: neonx8 xor() 10691 MB/s Sep 6 00:01:33.995861 kernel: raid6: neonx4 gen() 13444 MB/s Sep 6 00:01:34.012872 kernel: raid6: neonx4 xor() 11167 MB/s Sep 6 00:01:34.029857 kernel: raid6: neonx2 gen() 12839 MB/s Sep 6 00:01:34.046865 kernel: raid6: neonx2 xor() 10434 MB/s Sep 6 00:01:34.063876 kernel: raid6: neonx1 gen() 10428 MB/s Sep 6 00:01:34.080879 kernel: raid6: neonx1 xor() 8734 MB/s Sep 6 00:01:34.097888 kernel: raid6: int64x8 gen() 6070 MB/s Sep 6 00:01:34.115030 kernel: raid6: int64x8 xor() 3521 MB/s Sep 6 00:01:34.132177 kernel: raid6: int64x4 gen() 6943 MB/s Sep 6 00:01:34.148881 kernel: raid6: int64x4 xor() 3848 MB/s Sep 6 00:01:34.165921 kernel: raid6: int64x2 gen() 5813 MB/s Sep 6 00:01:34.182885 kernel: raid6: int64x2 xor() 3320 MB/s Sep 6 00:01:34.199886 kernel: raid6: int64x1 gen() 4952 MB/s Sep 6 00:01:34.217230 kernel: raid6: int64x1 xor() 2597 MB/s Sep 6 00:01:34.217293 kernel: raid6: using algorithm neonx4 gen() 13444 MB/s Sep 6 00:01:34.217303 kernel: raid6: .... xor() 11167 MB/s, rmw enabled Sep 6 00:01:34.217312 kernel: raid6: using neon recovery algorithm Sep 6 00:01:34.229896 kernel: xor: measuring software checksum speed Sep 6 00:01:34.229969 kernel: 8regs : 17184 MB/sec Sep 6 00:01:34.229979 kernel: 32regs : 19340 MB/sec Sep 6 00:01:34.231354 kernel: arm64_neon : 27917 MB/sec Sep 6 00:01:34.231401 kernel: xor: using function: arm64_neon (27917 MB/sec) Sep 6 00:01:34.287881 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 6 00:01:34.300192 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:01:34.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:34.302000 audit: BPF prog-id=7 op=LOAD Sep 6 00:01:34.307982 kernel: audit: type=1130 audit(1757116894.300:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:34.308023 kernel: audit: type=1334 audit(1757116894.302:10): prog-id=7 op=LOAD Sep 6 00:01:34.305000 audit: BPF prog-id=8 op=LOAD Sep 6 00:01:34.307725 systemd[1]: Starting systemd-udevd.service... Sep 6 00:01:34.329994 systemd-udevd[493]: Using default interface naming scheme 'v252'. Sep 6 00:01:34.333304 systemd[1]: Started systemd-udevd.service. Sep 6 00:01:34.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:34.338285 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:01:34.349814 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Sep 6 00:01:34.380849 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:01:34.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:34.382533 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:01:34.418944 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:01:34.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:34.452674 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 6 00:01:34.458298 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:01:34.458314 kernel: GPT:9289727 != 19775487 Sep 6 00:01:34.458323 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:01:34.458338 kernel: GPT:9289727 != 19775487 Sep 6 00:01:34.458346 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:01:34.458354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:01:34.481103 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:01:34.481905 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:01:34.486354 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:01:34.487747 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (555) Sep 6 00:01:34.491433 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:01:34.499257 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:01:34.500890 systemd[1]: Starting disk-uuid.service... Sep 6 00:01:34.507135 disk-uuid[563]: Primary Header is updated. Sep 6 00:01:34.507135 disk-uuid[563]: Secondary Entries is updated. Sep 6 00:01:34.507135 disk-uuid[563]: Secondary Header is updated. Sep 6 00:01:34.510008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:01:35.516854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:01:35.517010 disk-uuid[564]: The operation has completed successfully. Sep 6 00:01:35.539518 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:01:35.540817 systemd[1]: Finished disk-uuid.service. 
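The GPT warnings above (alternate header not at the end of the disk, 9289727 != 19775487 sectors) are the usual result of a smaller disk image having been written to a larger virtual disk; the backup GPT still sits where the image ended. On Flatcar the root partition is normally grown automatically during first boot, so manual repair is rarely needed, but a minimal sketch of the fix the kernel message points at, assuming the disk really is /dev/vda as logged (sgdisk comes from the gdisk package):

# Move the backup GPT header and partition entries to the true end of the disk
sgdisk -e /dev/vda

# Or let GNU Parted detect the mismatch and offer to fix it interactively
parted /dev/vda print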
Sep 6 00:01:35.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.546444 systemd[1]: Starting verity-setup.service... Sep 6 00:01:35.560859 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 00:01:35.585844 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:01:35.587980 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:01:35.589669 systemd[1]: Finished verity-setup.service. Sep 6 00:01:35.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.642867 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:01:35.643143 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:01:35.643844 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:01:35.644702 systemd[1]: Starting ignition-setup.service... Sep 6 00:01:35.646285 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:01:35.655098 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:01:35.655150 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:01:35.655160 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:01:35.668026 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:01:35.678076 systemd[1]: Finished ignition-setup.service. Sep 6 00:01:35.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.679694 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:01:35.739224 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:01:35.739949 ignition[660]: Ignition 2.14.0 Sep 6 00:01:35.739972 ignition[660]: Stage: fetch-offline Sep 6 00:01:35.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.740000 audit: BPF prog-id=9 op=LOAD Sep 6 00:01:35.741446 systemd[1]: Starting systemd-networkd.service... 
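systemd-networkd is started here and, a few lines further down, matches eth0 against the stock /usr/lib/systemd/network/zz-default.network unit and acquires a DHCPv4 lease. A minimal sketch of an equivalent network unit, assuming the interface name eth0 and that DHCP is wanted on it (the file path and name are illustrative):

cat <<'EOF' > /etc/systemd/network/10-eth0-dhcp.network
[Match]
Name=eth0

[Network]
DHCP=yes
EOF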
Sep 6 00:01:35.740010 ignition[660]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:01:35.740020 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:01:35.740158 ignition[660]: parsed url from cmdline: "" Sep 6 00:01:35.740161 ignition[660]: no config URL provided Sep 6 00:01:35.740166 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:01:35.740173 ignition[660]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:01:35.740190 ignition[660]: op(1): [started] loading QEMU firmware config module Sep 6 00:01:35.740195 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 6 00:01:35.745077 ignition[660]: op(1): [finished] loading QEMU firmware config module Sep 6 00:01:35.754214 ignition[660]: parsing config with SHA512: 8ed772a372b0f5f4a5e8c565811871e1878ff2ebb17e099d5d4377d1d273d8d3bdb412388cede730437e5e6b0defd9483d2cb80f67eed8097df73a28e58117e5 Sep 6 00:01:35.762896 unknown[660]: fetched base config from "system" Sep 6 00:01:35.762908 unknown[660]: fetched user config from "qemu" Sep 6 00:01:35.763284 ignition[660]: fetch-offline: fetch-offline passed Sep 6 00:01:35.764107 systemd-networkd[740]: lo: Link UP Sep 6 00:01:35.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.763348 ignition[660]: Ignition finished successfully Sep 6 00:01:35.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.764111 systemd-networkd[740]: lo: Gained carrier Sep 6 00:01:35.764489 systemd-networkd[740]: Enumeration completed Sep 6 00:01:35.764799 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:01:35.764818 systemd[1]: Started systemd-networkd.service. Sep 6 00:01:35.766692 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:01:35.766755 systemd-networkd[740]: eth0: Link UP Sep 6 00:01:35.766760 systemd-networkd[740]: eth0: Gained carrier Sep 6 00:01:35.767946 systemd[1]: Reached target network.target. Sep 6 00:01:35.769207 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 00:01:35.770103 systemd[1]: Starting ignition-kargs.service... Sep 6 00:01:35.771623 systemd[1]: Starting iscsiuio.service... Sep 6 00:01:35.778451 systemd[1]: Started iscsiuio.service. Sep 6 00:01:35.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.780157 systemd[1]: Starting iscsid.service... Sep 6 00:01:35.780242 ignition[745]: Ignition 2.14.0 Sep 6 00:01:35.780249 ignition[745]: Stage: kargs Sep 6 00:01:35.780348 ignition[745]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:01:35.780358 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:01:35.783799 systemd[1]: Finished ignition-kargs.service. Sep 6 00:01:35.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:35.782157 ignition[745]: kargs: kargs passed Sep 6 00:01:35.786759 iscsid[754]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:01:35.786759 iscsid[754]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 6 00:01:35.786759 iscsid[754]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:01:35.786759 iscsid[754]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:01:35.786759 iscsid[754]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:01:35.786759 iscsid[754]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:01:35.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.783932 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:01:35.782219 ignition[745]: Ignition finished successfully Sep 6 00:01:35.786487 systemd[1]: Starting ignition-disks.service... Sep 6 00:01:35.793617 ignition[755]: Ignition 2.14.0 Sep 6 00:01:35.787586 systemd[1]: Started iscsid.service. Sep 6 00:01:35.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.793624 ignition[755]: Stage: disks Sep 6 00:01:35.790390 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:01:35.793720 ignition[755]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:01:35.795443 systemd[1]: Finished ignition-disks.service. Sep 6 00:01:35.793730 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:01:35.796551 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:01:35.794492 ignition[755]: disks: disks passed Sep 6 00:01:35.797986 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:01:35.794538 ignition[755]: Ignition finished successfully Sep 6 00:01:35.799445 systemd[1]: Reached target local-fs.target. Sep 6 00:01:35.800929 systemd[1]: Reached target sysinit.target. Sep 6 00:01:35.802058 systemd[1]: Reached target basic.target. Sep 6 00:01:35.803557 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:01:35.804370 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:01:35.805476 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:01:35.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.806482 systemd[1]: Reached target remote-fs.target. Sep 6 00:01:35.808453 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:01:35.816235 systemd[1]: Finished dracut-pre-mount.service. 
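The iscsid warnings above are harmless on a machine that never logs into iSCSI targets; they only report that no InitiatorName has been generated yet. A minimal sketch of creating one, assuming the open-iscsi iscsi-iname helper is installed (the hand-written name in the comment is purely an example):

# Generate a unique initiator name and persist it where iscsid expects it
echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi

# Hand-written equivalent, following the format described in the warning:
# InitiatorName=iqn.2004-10.com.example:storage-host01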
Sep 6 00:01:35.818265 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:01:35.830689 systemd-fsck[777]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 6 00:01:35.834038 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:01:35.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.835814 systemd[1]: Mounting sysroot.mount... Sep 6 00:01:35.849882 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:01:35.850369 systemd[1]: Mounted sysroot.mount. Sep 6 00:01:35.851068 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:01:35.853163 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:01:35.853926 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:01:35.853971 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:01:35.853995 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:01:35.856261 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:01:35.858529 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:01:35.864413 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:01:35.868108 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:01:35.872025 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:01:35.876452 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:01:35.905012 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:01:35.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.906515 systemd[1]: Starting ignition-mount.service... Sep 6 00:01:35.907831 systemd[1]: Starting sysroot-boot.service... Sep 6 00:01:35.912189 bash[828]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 00:01:35.921192 ignition[830]: INFO : Ignition 2.14.0 Sep 6 00:01:35.921192 ignition[830]: INFO : Stage: mount Sep 6 00:01:35.922708 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:01:35.922708 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:01:35.922708 ignition[830]: INFO : mount: mount passed Sep 6 00:01:35.922708 ignition[830]: INFO : Ignition finished successfully Sep 6 00:01:35.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:35.922761 systemd[1]: Finished ignition-mount.service. Sep 6 00:01:35.927046 systemd[1]: Finished sysroot-boot.service. Sep 6 00:01:35.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:36.605728 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Sep 6 00:01:36.620506 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (838) Sep 6 00:01:36.620543 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:01:36.620553 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:01:36.621014 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:01:36.628507 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:01:36.630503 systemd[1]: Starting ignition-files.service... Sep 6 00:01:36.650830 ignition[858]: INFO : Ignition 2.14.0 Sep 6 00:01:36.650830 ignition[858]: INFO : Stage: files Sep 6 00:01:36.652228 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:01:36.652228 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:01:36.652228 ignition[858]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:01:36.655888 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:01:36.655888 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:01:36.658186 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:01:36.658186 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:01:36.660709 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:01:36.660709 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 6 00:01:36.658576 unknown[858]: wrote ssh authorized keys file for user: core Sep 6 00:01:37.077269 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 6 00:01:37.300377 systemd-networkd[740]: eth0: Gained IPv6LL Sep 6 00:01:37.506217 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 00:01:37.506217 ignition[858]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Sep 6 00:01:37.509340 ignition[858]: INFO : files: op(7): op(8): [started] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:01:37.509340 ignition[858]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:01:37.509340 ignition[858]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 6 00:01:37.509340 ignition[858]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 6 00:01:37.509340 ignition[858]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:01:37.532788 ignition[858]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:01:37.533934 ignition[858]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 6 00:01:37.533934 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:01:37.533934 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:01:37.533934 ignition[858]: INFO : files: files passed Sep 6 00:01:37.533934 ignition[858]: INFO : Ignition finished successfully Sep 6 00:01:37.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.534179 systemd[1]: Finished ignition-files.service. Sep 6 00:01:37.536855 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:01:37.538124 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:01:37.539401 systemd[1]: Starting ignition-quench.service... Sep 6 00:01:37.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.545489 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 6 00:01:37.543423 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:01:37.547351 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:01:37.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.543515 systemd[1]: Finished ignition-quench.service. Sep 6 00:01:37.546728 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:01:37.548151 systemd[1]: Reached target ignition-complete.target. Sep 6 00:01:37.550394 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:01:37.562920 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:01:37.563015 systemd[1]: Finished initrd-parse-etc.service. 
Sep 6 00:01:37.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.564339 systemd[1]: Reached target initrd-fs.target. Sep 6 00:01:37.565495 systemd[1]: Reached target initrd.target. Sep 6 00:01:37.566498 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:01:37.567270 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:01:37.577800 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:01:37.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.579292 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:01:37.587438 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:01:37.588348 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:01:37.589669 systemd[1]: Stopped target timers.target. Sep 6 00:01:37.590706 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:01:37.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.590807 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:01:37.591914 systemd[1]: Stopped target initrd.target. Sep 6 00:01:37.593106 systemd[1]: Stopped target basic.target. Sep 6 00:01:37.594104 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:01:37.595150 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:01:37.596272 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:01:37.597446 systemd[1]: Stopped target remote-fs.target. Sep 6 00:01:37.598668 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:01:37.599970 systemd[1]: Stopped target sysinit.target. Sep 6 00:01:37.600999 systemd[1]: Stopped target local-fs.target. Sep 6 00:01:37.602141 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:01:37.603217 systemd[1]: Stopped target swap.target. Sep 6 00:01:37.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.604237 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:01:37.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.604346 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:01:37.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.606995 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:01:37.607635 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:01:37.607722 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:01:37.608933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Sep 6 00:01:37.609022 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:01:37.610264 systemd[1]: Stopped target paths.target. Sep 6 00:01:37.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.611240 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:01:37.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.612904 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:01:37.614275 systemd[1]: Stopped target slices.target. Sep 6 00:01:37.623811 iscsid[754]: iscsid shutting down. Sep 6 00:01:37.616248 systemd[1]: Stopped target sockets.target. Sep 6 00:01:37.617744 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:01:37.617873 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:01:37.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.619653 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:01:37.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.619744 systemd[1]: Stopped ignition-files.service. Sep 6 00:01:37.622074 systemd[1]: Stopping ignition-mount.service... Sep 6 00:01:37.634244 ignition[898]: INFO : Ignition 2.14.0 Sep 6 00:01:37.634244 ignition[898]: INFO : Stage: umount Sep 6 00:01:37.634244 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:01:37.634244 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:01:37.634244 ignition[898]: INFO : umount: umount passed Sep 6 00:01:37.634244 ignition[898]: INFO : Ignition finished successfully Sep 6 00:01:37.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.623284 systemd[1]: Stopping iscsid.service... Sep 6 00:01:37.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.624959 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:01:37.625951 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
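The Ignition entries in this stretch fall into two groups: the files-stage ops near the top, which write coreos-metadata.service into /sysroot and then remove its enablement symlinks and preset it to disabled, and the umount-stage header just above, which notes that no base configs are present under /usr/lib/ignition. The preset-to-disabled sequence is what Ignition performs for a systemd unit declared with enabled set to false. A minimal sketch of such a config follows, assuming the spec v3 JSON format handled by Ignition 2.x; the version string and unit body are placeholders, not anything recovered from this boot, and whether the real unit comes from a user config or a provider default is not visible here.

    {
      "ignition": { "version": "3.3.0" },
      "systemd": {
        "units": [
          {
            "name": "coreos-metadata.service",
            "enabled": false,
            "contents": "[Unit]\nDescription=Placeholder metadata unit (illustrative only)\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
          }
        ]
      }
    }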
Sep 6 00:01:37.626089 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:01:37.627711 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:01:37.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.627855 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:01:37.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.630948 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:01:37.631064 systemd[1]: Stopped iscsid.service. Sep 6 00:01:37.634011 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:01:37.634073 systemd[1]: Closed iscsid.socket. Sep 6 00:01:37.635579 systemd[1]: Stopping iscsiuio.service... Sep 6 00:01:37.637625 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:01:37.637849 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:01:37.641162 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:01:37.641624 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:01:37.641713 systemd[1]: Stopped iscsiuio.service. Sep 6 00:01:37.643760 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:01:37.643850 systemd[1]: Stopped ignition-mount.service. Sep 6 00:01:37.645292 systemd[1]: Stopped target network.target. Sep 6 00:01:37.648047 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:01:37.648089 systemd[1]: Closed iscsiuio.socket. Sep 6 00:01:37.651012 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:01:37.651066 systemd[1]: Stopped ignition-disks.service. Sep 6 00:01:37.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.653399 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:01:37.653441 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:01:37.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.655141 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:01:37.655181 systemd[1]: Stopped ignition-setup.service. Sep 6 00:01:37.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.655928 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:01:37.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.657203 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:01:37.663911 systemd-networkd[740]: eth0: DHCPv6 lease lost Sep 6 00:01:37.667634 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 6 00:01:37.667725 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:01:37.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.669079 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:01:37.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.669119 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:01:37.679000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:01:37.671204 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:01:37.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.671427 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:01:37.673000 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:01:37.685000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:01:37.673074 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:01:37.674131 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:01:37.674158 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:01:37.675755 systemd[1]: Stopping network-cleanup.service... Sep 6 00:01:37.676699 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:01:37.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.676749 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:01:37.678104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:01:37.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.678140 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:01:37.679803 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:01:37.679997 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:01:37.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.681311 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:01:37.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.686177 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:01:37.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.688922 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:01:37.689027 systemd[1]: Stopped network-cleanup.service. 
Sep 6 00:01:37.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.691049 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:01:37.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.691166 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:01:37.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.692497 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:01:37.692534 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:01:37.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.693751 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:01:37.693782 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:01:37.694926 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:01:37.694967 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:01:37.696465 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:01:37.696500 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:01:37.697786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:01:37.697820 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:01:37.699756 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:01:37.700544 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:01:37.700611 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 00:01:37.702757 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:01:37.702793 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:01:37.703521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:01:37.703556 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:01:37.705776 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 00:01:37.706238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:01:37.706322 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:01:37.707432 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:01:37.709324 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:01:37.716127 systemd[1]: Switching root. Sep 6 00:01:37.730079 systemd-journald[290]: Journal stopped Sep 6 00:01:39.806983 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Sep 6 00:01:39.807038 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:01:39.807055 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:01:39.807066 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:01:39.807075 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:01:39.807086 kernel: SELinux: policy capability open_perms=1 Sep 6 00:01:39.807096 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:01:39.807106 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:01:39.807117 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:01:39.807127 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:01:39.807137 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:01:39.807147 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:01:39.807157 systemd[1]: Successfully loaded SELinux policy in 33.186ms. Sep 6 00:01:39.807176 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.479ms. Sep 6 00:01:39.807187 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:01:39.807199 systemd[1]: Detected virtualization kvm. Sep 6 00:01:39.807211 systemd[1]: Detected architecture arm64. Sep 6 00:01:39.807221 systemd[1]: Detected first boot. Sep 6 00:01:39.807232 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:01:39.807242 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:01:39.807253 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:01:39.807263 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:01:39.807274 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:01:39.807286 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:01:39.807298 kernel: kauditd_printk_skb: 80 callbacks suppressed Sep 6 00:01:39.807308 kernel: audit: type=1334 audit(1757116899.686:84): prog-id=12 op=LOAD Sep 6 00:01:39.807318 kernel: audit: type=1334 audit(1757116899.686:85): prog-id=3 op=UNLOAD Sep 6 00:01:39.807329 kernel: audit: type=1334 audit(1757116899.686:86): prog-id=13 op=LOAD Sep 6 00:01:39.807338 kernel: audit: type=1334 audit(1757116899.686:87): prog-id=14 op=LOAD Sep 6 00:01:39.807347 kernel: audit: type=1334 audit(1757116899.686:88): prog-id=4 op=UNLOAD Sep 6 00:01:39.807357 kernel: audit: type=1334 audit(1757116899.686:89): prog-id=5 op=UNLOAD Sep 6 00:01:39.807367 kernel: audit: type=1334 audit(1757116899.688:90): prog-id=15 op=LOAD Sep 6 00:01:39.807377 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:01:39.807388 kernel: audit: type=1334 audit(1757116899.688:91): prog-id=12 op=UNLOAD Sep 6 00:01:39.807398 systemd[1]: Stopped initrd-switch-root.service. 
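The warnings above about locksmithd.service and docker.socket spell out their own fix: replace CPUShares= with CPUWeight=, MemoryLimit= with MemoryMax=, and point ListenStream= at /run/docker.sock instead of the legacy /var/run/ alias. A sketch of how the flagged lines would read after that substitution is below; the weight and memory values are illustrative, since the originals are not shown in the log, and on Flatcar the change would in practice live in a drop-in under /etc rather than in the shipped unit under the read-only /usr.

    [Service]
    # locksmithd.service, lines 8-9 after the suggested substitution (values are examples)
    CPUWeight=100
    MemoryMax=512M

    [Socket]
    # docker.socket, line 8 after the suggested substitution
    ListenStream=/run/docker.sock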
Sep 6 00:01:39.807408 kernel: audit: type=1334 audit(1757116899.689:92): prog-id=16 op=LOAD Sep 6 00:01:39.807417 kernel: audit: type=1334 audit(1757116899.689:93): prog-id=17 op=LOAD Sep 6 00:01:39.807431 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:01:39.807442 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:01:39.807453 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:01:39.807465 systemd[1]: Created slice system-getty.slice. Sep 6 00:01:39.807476 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:01:39.807487 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:01:39.807498 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:01:39.807509 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:01:39.807519 systemd[1]: Created slice user.slice. Sep 6 00:01:39.807529 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:01:39.807540 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:01:39.807551 systemd[1]: Set up automount boot.automount. Sep 6 00:01:39.807572 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:01:39.807584 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:01:39.807598 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:01:39.807608 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:01:39.807620 systemd[1]: Reached target integritysetup.target. Sep 6 00:01:39.807631 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:01:39.807641 systemd[1]: Reached target remote-fs.target. Sep 6 00:01:39.807652 systemd[1]: Reached target slices.target. Sep 6 00:01:39.807662 systemd[1]: Reached target swap.target. Sep 6 00:01:39.807673 systemd[1]: Reached target torcx.target. Sep 6 00:01:39.807684 systemd[1]: Reached target veritysetup.target. Sep 6 00:01:39.807698 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:01:39.807708 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:01:39.807720 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:01:39.807732 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:01:39.807742 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:01:39.807753 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:01:39.807763 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:01:39.807775 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:01:39.807785 systemd[1]: Mounting media.mount... Sep 6 00:01:39.807796 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:01:39.807806 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:01:39.807816 systemd[1]: Mounting tmp.mount... Sep 6 00:01:39.807828 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:01:39.807848 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:01:39.807859 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:01:39.807870 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:01:39.807880 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:01:39.807890 systemd[1]: Starting modprobe@drm.service... Sep 6 00:01:39.807900 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:01:39.807911 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:01:39.807921 systemd[1]: Starting modprobe@loop.service... Sep 6 00:01:39.807933 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 6 00:01:39.807944 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:01:39.807954 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:01:39.807965 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:01:39.807975 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:01:39.807985 kernel: fuse: init (API version 7.34) Sep 6 00:01:39.807996 systemd[1]: Stopped systemd-journald.service. Sep 6 00:01:39.808008 systemd[1]: Starting systemd-journald.service... Sep 6 00:01:39.808018 kernel: loop: module loaded Sep 6 00:01:39.808029 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:01:39.808040 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:01:39.808051 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:01:39.808062 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:01:39.808073 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:01:39.808084 systemd[1]: Stopped verity-setup.service. Sep 6 00:01:39.808094 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:01:39.808104 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:01:39.808114 systemd[1]: Mounted media.mount. Sep 6 00:01:39.808124 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:01:39.808134 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:01:39.808144 systemd[1]: Mounted tmp.mount. Sep 6 00:01:39.808159 systemd-journald[1005]: Journal started Sep 6 00:01:39.808199 systemd-journald[1005]: Runtime Journal (/run/log/journal/2bc359670e1d44ef92a9eff7771e6383) is 6.0M, max 48.7M, 42.6M free. Sep 6 00:01:37.787000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:01:37.879000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:01:37.879000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:01:37.879000 audit: BPF prog-id=10 op=LOAD Sep 6 00:01:37.879000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:01:37.879000 audit: BPF prog-id=11 op=LOAD Sep 6 00:01:37.879000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:01:37.927000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:01:37.927000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:37.927000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:01:37.929000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:01:37.929000 audit[931]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:37.929000 audit: CWD cwd="/" Sep 6 00:01:37.929000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:01:37.929000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:01:37.929000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:01:39.686000 audit: BPF prog-id=12 op=LOAD Sep 6 00:01:39.686000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:01:39.686000 audit: BPF prog-id=13 op=LOAD Sep 6 00:01:39.686000 audit: BPF prog-id=14 op=LOAD Sep 6 00:01:39.686000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:01:39.686000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:01:39.688000 audit: BPF prog-id=15 op=LOAD Sep 6 00:01:39.688000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:01:39.689000 audit: BPF prog-id=16 op=LOAD Sep 6 00:01:39.689000 audit: BPF prog-id=17 op=LOAD Sep 6 00:01:39.689000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:01:39.690000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:01:39.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.699000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:01:39.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:39.782000 audit: BPF prog-id=18 op=LOAD Sep 6 00:01:39.782000 audit: BPF prog-id=19 op=LOAD Sep 6 00:01:39.782000 audit: BPF prog-id=20 op=LOAD Sep 6 00:01:39.782000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:01:39.782000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:01:39.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.804000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:01:39.804000 audit[1005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffed02dff0 a2=4000 a3=1 items=0 ppid=1 pid=1005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:39.804000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:01:39.685936 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:01:37.925936 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:01:39.809353 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:01:39.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.685950 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:01:37.926263 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:01:39.691525 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 6 00:01:37.926282 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:01:37.926313 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:01:37.926323 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:01:37.926353 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:01:37.926365 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:01:37.926561 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:01:37.926609 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:01:37.926621 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:01:37.927680 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:01:37.927724 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:01:37.927743 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:01:37.927757 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:01:37.927774 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:01:39.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:37.927787 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:01:39.810846 systemd[1]: Started systemd-journald.service. 
Sep 6 00:01:39.419334 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:39Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:01:39.419613 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:39Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:01:39.419708 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:39Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:01:39.419884 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:39Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:01:39.419937 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:39Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:01:39.419998 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:01:39Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:01:39.811358 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:01:39.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.812261 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:01:39.812427 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:01:39.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.813373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:01:39.813528 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:01:39.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:01:39.814600 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:01:39.814742 systemd[1]: Finished modprobe@drm.service. Sep 6 00:01:39.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.815710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:01:39.816015 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:01:39.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.816947 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:01:39.817102 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:01:39.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.817977 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:01:39.818127 systemd[1]: Finished modprobe@loop.service. Sep 6 00:01:39.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.819083 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:01:39.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.820198 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:01:39.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.821210 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:01:39.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.822339 systemd[1]: Reached target network-pre.target. 
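The torcx-generator messages replayed a few entries up (they carry the 00:01:37 timestamps interleaved with journald's startup) show the vendor profile being applied: the docker image is unpacked from /usr/share/torcx/store at reference com.coreos.cl and its binaries, networkd units and systemd units are propagated under /run/torcx, while no user override is in effect because /etc/torcx/next-profile does not exist. A user-selected profile is a small JSON manifest; the sketch below picks the 20.10 docker reference that the store messages list as available. Only the /etc/torcx/next-profile path itself appears in the log; treat the profiles/ subdirectory and file name as assumptions.

    /etc/torcx/profiles/docker-20.10.json (hypothetical path):
    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "20.10" }
        ]
      }
    }

    /etc/torcx/next-profile would then contain the single line:
    docker-20.10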
Sep 6 00:01:39.824384 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:01:39.826122 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:01:39.826719 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:01:39.828414 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:01:39.830293 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:01:39.831157 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:01:39.834717 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:01:39.835647 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:01:39.836731 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:01:39.839616 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:01:39.841950 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:01:39.842995 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:01:39.843405 systemd-journald[1005]: Time spent on flushing to /var/log/journal/2bc359670e1d44ef92a9eff7771e6383 is 13.502ms for 981 entries. Sep 6 00:01:39.843405 systemd-journald[1005]: System Journal (/var/log/journal/2bc359670e1d44ef92a9eff7771e6383) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:01:39.881468 systemd-journald[1005]: Received client request to flush runtime journal. Sep 6 00:01:39.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.848059 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:01:39.849785 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:01:39.882073 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:01:39.850826 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:01:39.852740 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:01:39.859569 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:01:39.880511 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:01:39.882593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:01:39.883688 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:01:39.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:39.904409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Sep 6 00:01:39.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.223302 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:01:40.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.223000 audit: BPF prog-id=21 op=LOAD Sep 6 00:01:40.224000 audit: BPF prog-id=22 op=LOAD Sep 6 00:01:40.224000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:01:40.224000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:01:40.225499 systemd[1]: Starting systemd-udevd.service... Sep 6 00:01:40.241238 systemd-udevd[1036]: Using default interface naming scheme 'v252'. Sep 6 00:01:40.259474 systemd[1]: Started systemd-udevd.service. Sep 6 00:01:40.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.260000 audit: BPF prog-id=23 op=LOAD Sep 6 00:01:40.266239 systemd[1]: Starting systemd-networkd.service... Sep 6 00:01:40.269000 audit: BPF prog-id=24 op=LOAD Sep 6 00:01:40.270000 audit: BPF prog-id=25 op=LOAD Sep 6 00:01:40.270000 audit: BPF prog-id=26 op=LOAD Sep 6 00:01:40.271568 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:01:40.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.296125 systemd[1]: Started systemd-userdbd.service. Sep 6 00:01:40.297565 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 6 00:01:40.327422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:01:40.347368 systemd-networkd[1055]: lo: Link UP Sep 6 00:01:40.347380 systemd-networkd[1055]: lo: Gained carrier Sep 6 00:01:40.347776 systemd-networkd[1055]: Enumeration completed Sep 6 00:01:40.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.347884 systemd[1]: Started systemd-networkd.service. Sep 6 00:01:40.349673 systemd-networkd[1055]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:01:40.350883 systemd-networkd[1055]: eth0: Link UP Sep 6 00:01:40.350892 systemd-networkd[1055]: eth0: Gained carrier Sep 6 00:01:40.366997 systemd-networkd[1055]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:01:40.377248 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:01:40.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.379221 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:01:40.391910 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:01:40.417852 systemd[1]: Finished lvm2-activation-early.service. 
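eth0 above is matched by the stock /usr/lib/systemd/network/zz-default.network and gets 10.0.0.47/16 from DHCP. networkd applies the first matching .network file in lexical order across /etc, /run and /usr/lib, so a site-specific file that sorts earlier overrides the default. The sketch below is a hypothetical override; the commented address and gateway simply reuse the lease values from the log as illustrations.

    # /etc/systemd/network/10-eth0.network (hypothetical override)
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    # Or, instead of DHCP, pin the values seen in the lease above:
    # Address=10.0.0.47/16
    # Gateway=10.0.0.1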
Sep 6 00:01:40.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.418676 systemd[1]: Reached target cryptsetup.target. Sep 6 00:01:40.420537 systemd[1]: Starting lvm2-activation.service... Sep 6 00:01:40.424080 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:01:40.452808 systemd[1]: Finished lvm2-activation.service. Sep 6 00:01:40.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.453595 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:01:40.454308 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:01:40.454342 systemd[1]: Reached target local-fs.target. Sep 6 00:01:40.454936 systemd[1]: Reached target machines.target. Sep 6 00:01:40.456746 systemd[1]: Starting ldconfig.service... Sep 6 00:01:40.457781 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:01:40.457844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:01:40.459029 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:01:40.460914 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:01:40.463381 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:01:40.465131 systemd[1]: Starting systemd-sysext.service... Sep 6 00:01:40.466455 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl) Sep 6 00:01:40.467594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:01:40.473881 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:01:40.476718 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:01:40.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.484756 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:01:40.484982 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:01:40.499882 kernel: loop0: detected capacity change from 0 to 207008 Sep 6 00:01:40.549077 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:01:40.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.555853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:01:40.569256 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Sep 6 00:01:40.569256 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters Sep 6 00:01:40.571868 kernel: loop1: detected capacity change from 0 to 207008 Sep 6 00:01:40.573808 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Sep 6 00:01:40.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.577425 (sd-sysext)[1085]: Using extensions 'kubernetes'. Sep 6 00:01:40.578014 (sd-sysext)[1085]: Merged extensions into '/usr'. Sep 6 00:01:40.596728 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:01:40.598095 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:01:40.599974 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:01:40.601980 systemd[1]: Starting modprobe@loop.service... Sep 6 00:01:40.602779 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:01:40.602921 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:01:40.603650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:01:40.603795 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:01:40.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.605093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:01:40.605218 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:01:40.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.606455 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:01:40.606577 systemd[1]: Finished modprobe@loop.service. Sep 6 00:01:40.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.607759 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:01:40.607879 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:01:40.669143 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:01:40.672864 systemd[1]: Finished ldconfig.service. 
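The (sd-sysext) messages above, "Using extensions 'kubernetes'" and "Merged extensions into '/usr'", are systemd-sysext at work: an extension image or directory placed in one of the extension search paths (for example /etc/extensions or /var/lib/extensions) is overlaid onto /usr, provided it carries an extension-release file that matches the host. The sketch below is a generic illustration of that file, not the contents of the actual Flatcar kubernetes extension.

    # inside the extension, at usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0
    # ID=_any would skip the OS match entirely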
Sep 6 00:01:40.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.800340 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:01:40.803170 systemd[1]: Mounting boot.mount... Sep 6 00:01:40.806094 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:01:40.814619 systemd[1]: Mounted boot.mount. Sep 6 00:01:40.815805 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:01:40.818095 systemd[1]: Finished systemd-sysext.service. Sep 6 00:01:40.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.820313 systemd[1]: Starting ensure-sysext.service... Sep 6 00:01:40.822742 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:01:40.823892 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:01:40.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:40.828130 systemd[1]: Reloading. Sep 6 00:01:40.832651 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:01:40.833527 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:01:40.834955 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:01:40.866446 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-06T00:01:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:01:40.866478 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-06T00:01:40Z" level=info msg="torcx already run" Sep 6 00:01:40.931167 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:01:40.931186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:01:40.949009 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
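The systemd-tmpfiles warnings earlier in this stretch ("Duplicate line for path ..., ignoring") mean two tmpfiles.d fragments declare the same path; the duplicate is skipped, so the messages are harmless. The snippet below only illustrates the line format involved, using /run/lock as in the first warning; it is not a reproduction of legacy.conf or provision.conf.

    # tmpfiles.d syntax: type path mode user group age argument
    d /run/lock 0755 root root -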
Sep 6 00:01:40.994000 audit: BPF prog-id=27 op=LOAD Sep 6 00:01:40.994000 audit: BPF prog-id=28 op=LOAD Sep 6 00:01:40.994000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:01:40.994000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:01:40.994000 audit: BPF prog-id=29 op=LOAD Sep 6 00:01:40.994000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:01:40.994000 audit: BPF prog-id=30 op=LOAD Sep 6 00:01:40.994000 audit: BPF prog-id=31 op=LOAD Sep 6 00:01:40.994000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:01:40.994000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:01:40.996000 audit: BPF prog-id=32 op=LOAD Sep 6 00:01:40.996000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:01:40.996000 audit: BPF prog-id=33 op=LOAD Sep 6 00:01:40.996000 audit: BPF prog-id=34 op=LOAD Sep 6 00:01:40.996000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:01:40.996000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:01:40.997000 audit: BPF prog-id=35 op=LOAD Sep 6 00:01:40.997000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:01:41.000396 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:01:41.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.004885 systemd[1]: Starting audit-rules.service... Sep 6 00:01:41.006546 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:01:41.009002 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:01:41.009000 audit: BPF prog-id=36 op=LOAD Sep 6 00:01:41.011000 audit: BPF prog-id=37 op=LOAD Sep 6 00:01:41.011207 systemd[1]: Starting systemd-resolved.service... Sep 6 00:01:41.013340 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:01:41.015393 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:01:41.016914 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:01:41.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.019000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.020046 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:01:41.023707 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.025027 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:01:41.027115 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:01:41.029466 systemd[1]: Starting modprobe@loop.service... Sep 6 00:01:41.030319 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.030443 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:01:41.030539 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:01:41.031668 systemd[1]: Finished systemd-update-utmp.service. 
Sep 6 00:01:41.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.032956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:01:41.033081 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:01:41.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.034065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:01:41.034172 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:01:41.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.035320 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:01:41.035446 systemd[1]: Finished modprobe@loop.service. Sep 6 00:01:41.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.039355 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.040771 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:01:41.043625 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:01:41.045896 systemd[1]: Starting modprobe@loop.service... Sep 6 00:01:41.046749 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.046893 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:01:41.046994 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:01:41.047983 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:01:41.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.049281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:01:41.049458 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 6 00:01:41.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.050760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:01:41.050887 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:01:41.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.052037 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:01:41.052163 systemd[1]: Finished modprobe@loop.service. Sep 6 00:01:41.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.055295 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.056726 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:01:41.058939 systemd[1]: Starting modprobe@drm.service... Sep 6 00:01:41.060667 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:01:41.062943 systemd[1]: Starting modprobe@loop.service... Sep 6 00:01:41.063650 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.063769 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:01:41.065137 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:01:41.067212 systemd[1]: Starting systemd-update-done.service... Sep 6 00:01:41.067923 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:01:41.069294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:01:41.069450 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:01:41.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.070975 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 6 00:01:41.071107 systemd[1]: Finished modprobe@drm.service. Sep 6 00:01:41.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:01:41.072103 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:01:41.072657 systemd-timesyncd[1162]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:01:41.072711 systemd-timesyncd[1162]: Initial clock synchronization to Sat 2025-09-06 00:01:41.350422 UTC. Sep 6 00:01:41.073000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:01:41.073000 audit[1179]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffca17b070 a2=420 a3=0 items=0 ppid=1152 pid=1179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:01:41.073000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:01:41.074088 augenrules[1179]: No rules Sep 6 00:01:41.074743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:01:41.074956 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:01:41.076051 systemd[1]: Finished audit-rules.service. Sep 6 00:01:41.077053 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:01:41.077168 systemd[1]: Finished modprobe@loop.service. Sep 6 00:01:41.078364 systemd[1]: Finished systemd-update-done.service. Sep 6 00:01:41.078527 systemd-resolved[1158]: Positive Trust Anchors: Sep 6 00:01:41.078724 systemd-resolved[1158]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:01:41.078800 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:01:41.079779 systemd[1]: Reached target time-set.target. Sep 6 00:01:41.080813 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:01:41.080864 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.081140 systemd[1]: Finished ensure-sysext.service. Sep 6 00:01:41.088746 systemd-resolved[1158]: Defaulting to hostname 'linux'. Sep 6 00:01:41.090228 systemd[1]: Started systemd-resolved.service. Sep 6 00:01:41.090912 systemd[1]: Reached target network.target. Sep 6 00:01:41.091708 systemd[1]: Reached target nss-lookup.target. Sep 6 00:01:41.092374 systemd[1]: Reached target sysinit.target. Sep 6 00:01:41.093189 systemd[1]: Started motdgen.path. Sep 6 00:01:41.093740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Sep 6 00:01:41.094961 systemd[1]: Started logrotate.timer. Sep 6 00:01:41.095598 systemd[1]: Started mdadm.timer. Sep 6 00:01:41.096140 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:01:41.096752 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:01:41.096780 systemd[1]: Reached target paths.target. Sep 6 00:01:41.097380 systemd[1]: Reached target timers.target. Sep 6 00:01:41.098320 systemd[1]: Listening on dbus.socket. Sep 6 00:01:41.100055 systemd[1]: Starting docker.socket... Sep 6 00:01:41.103056 systemd[1]: Listening on sshd.socket. Sep 6 00:01:41.103742 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:01:41.104176 systemd[1]: Listening on docker.socket. Sep 6 00:01:41.104822 systemd[1]: Reached target sockets.target. Sep 6 00:01:41.105396 systemd[1]: Reached target basic.target. Sep 6 00:01:41.105991 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.106024 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:01:41.107037 systemd[1]: Starting containerd.service... Sep 6 00:01:41.108667 systemd[1]: Starting dbus.service... Sep 6 00:01:41.110304 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:01:41.112180 systemd[1]: Starting extend-filesystems.service... Sep 6 00:01:41.113029 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:01:41.114166 systemd[1]: Starting motdgen.service... Sep 6 00:01:41.115947 jq[1194]: false Sep 6 00:01:41.115926 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:01:41.117620 systemd[1]: Starting sshd-keygen.service... Sep 6 00:01:41.120711 systemd[1]: Starting systemd-logind.service... Sep 6 00:01:41.121692 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:01:41.121760 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:01:41.122313 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:01:41.123041 systemd[1]: Starting update-engine.service... Sep 6 00:01:41.125375 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:01:41.127802 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:01:41.128015 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:01:41.128488 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:01:41.128647 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 6 00:01:41.129633 jq[1208]: true Sep 6 00:01:41.130349 extend-filesystems[1195]: Found loop1 Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda1 Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda2 Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda3 Sep 6 00:01:41.131747 extend-filesystems[1195]: Found usr Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda4 Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda6 Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda7 Sep 6 00:01:41.131747 extend-filesystems[1195]: Found vda9 Sep 6 00:01:41.131747 extend-filesystems[1195]: Checking size of /dev/vda9 Sep 6 00:01:41.141115 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:01:41.141463 jq[1214]: true Sep 6 00:01:41.141345 systemd[1]: Finished motdgen.service. Sep 6 00:01:41.150270 extend-filesystems[1195]: Resized partition /dev/vda9 Sep 6 00:01:41.158910 extend-filesystems[1230]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:01:41.163885 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 00:01:41.168002 dbus-daemon[1193]: [system] SELinux support is enabled Sep 6 00:01:41.168183 systemd[1]: Started dbus.service. Sep 6 00:01:41.171281 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:01:41.171305 systemd[1]: Reached target system-config.target. Sep 6 00:01:41.172039 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:01:41.172061 systemd[1]: Reached target user-config.target. Sep 6 00:01:41.188318 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) Sep 6 00:01:41.190129 systemd-logind[1203]: New seat seat0. Sep 6 00:01:41.195157 update_engine[1206]: I0906 00:01:41.194808 1206 main.cc:92] Flatcar Update Engine starting Sep 6 00:01:41.197953 env[1216]: time="2025-09-06T00:01:41.197901720Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:01:41.199085 update_engine[1206]: I0906 00:01:41.198987 1206 update_check_scheduler.cc:74] Next update check in 10m2s Sep 6 00:01:41.202058 systemd[1]: Started update-engine.service. Sep 6 00:01:41.202863 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 00:01:41.203080 systemd[1]: Started systemd-logind.service. Sep 6 00:01:41.224119 env[1216]: time="2025-09-06T00:01:41.215562560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:01:41.224119 env[1216]: time="2025-09-06T00:01:41.224042800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:01:41.205626 systemd[1]: Started locksmithd.service. Sep 6 00:01:41.224474 extend-filesystems[1230]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:01:41.224474 extend-filesystems[1230]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:01:41.224474 extend-filesystems[1230]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 00:01:41.230354 extend-filesystems[1195]: Resized filesystem in /dev/vda9 Sep 6 00:01:41.231654 bash[1241]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:01:41.225378 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.226247400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.226277160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.226713040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.226735560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.226749680Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.226759400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.228424040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.228739080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.231208440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:01:41.231785 env[1216]: time="2025-09-06T00:01:41.231231120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:01:41.225569 systemd[1]: Finished extend-filesystems.service. Sep 6 00:01:41.232078 env[1216]: time="2025-09-06T00:01:41.231306400Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:01:41.232078 env[1216]: time="2025-09-06T00:01:41.231318360Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:01:41.228468 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237517680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237555240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237569640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237602160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237617760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237631480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237644160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.237982440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.238001720Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.238014480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.238026360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.238039560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.238158000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:01:41.238445 env[1216]: time="2025-09-06T00:01:41.238231680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:01:41.238882 env[1216]: time="2025-09-06T00:01:41.238859600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:01:41.238960 env[1216]: time="2025-09-06T00:01:41.238945040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239027 env[1216]: time="2025-09-06T00:01:41.239011960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:01:41.239187 env[1216]: time="2025-09-06T00:01:41.239171600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239263 env[1216]: time="2025-09-06T00:01:41.239247960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239321 env[1216]: time="2025-09-06T00:01:41.239308200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239378 env[1216]: time="2025-09-06T00:01:41.239363880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239434 env[1216]: time="2025-09-06T00:01:41.239420200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239493 env[1216]: time="2025-09-06T00:01:41.239478600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239604 env[1216]: time="2025-09-06T00:01:41.239542880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 6 00:01:41.239682 env[1216]: time="2025-09-06T00:01:41.239665680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.239751 env[1216]: time="2025-09-06T00:01:41.239736280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:01:41.240129 env[1216]: time="2025-09-06T00:01:41.240105480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.240226 env[1216]: time="2025-09-06T00:01:41.240211120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.240397 env[1216]: time="2025-09-06T00:01:41.240379040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:01:41.240481 env[1216]: time="2025-09-06T00:01:41.240465800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:01:41.240574 env[1216]: time="2025-09-06T00:01:41.240530120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:01:41.240697 env[1216]: time="2025-09-06T00:01:41.240680680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:01:41.240769 env[1216]: time="2025-09-06T00:01:41.240754880Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:01:41.240881 env[1216]: time="2025-09-06T00:01:41.240865360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:01:41.241283 env[1216]: time="2025-09-06T00:01:41.241226600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:01:41.242147 env[1216]: time="2025-09-06T00:01:41.241831000Z" level=info msg="Connect containerd service" Sep 6 00:01:41.242363 env[1216]: time="2025-09-06T00:01:41.242241960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:01:41.243334 env[1216]: time="2025-09-06T00:01:41.243306080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:01:41.243845 env[1216]: time="2025-09-06T00:01:41.243775560Z" level=info msg="Start subscribing containerd event" Sep 6 00:01:41.243952 env[1216]: time="2025-09-06T00:01:41.243932760Z" level=info msg="Start recovering state" Sep 6 00:01:41.244023 env[1216]: time="2025-09-06T00:01:41.244005680Z" level=info msg="Start event monitor" Sep 6 00:01:41.244058 env[1216]: time="2025-09-06T00:01:41.244029440Z" level=info msg="Start snapshots syncer" Sep 6 00:01:41.244058 env[1216]: time="2025-09-06T00:01:41.244040280Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:01:41.244058 env[1216]: time="2025-09-06T00:01:41.244046840Z" level=info msg="Start streaming server" Sep 6 00:01:41.244194 env[1216]: time="2025-09-06T00:01:41.244174480Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 6 00:01:41.244285 env[1216]: time="2025-09-06T00:01:41.244271240Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:01:41.244414 env[1216]: time="2025-09-06T00:01:41.244399720Z" level=info msg="containerd successfully booted in 0.048349s" Sep 6 00:01:41.244467 systemd[1]: Started containerd.service. Sep 6 00:01:41.254537 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:01:41.716467 systemd-networkd[1055]: eth0: Gained IPv6LL Sep 6 00:01:41.719930 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:01:41.720929 systemd[1]: Reached target network-online.target. Sep 6 00:01:41.723509 systemd[1]: Starting kubelet.service... Sep 6 00:01:42.071411 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:01:42.090136 systemd[1]: Finished sshd-keygen.service. Sep 6 00:01:42.092401 systemd[1]: Starting issuegen.service... Sep 6 00:01:42.097418 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:01:42.097583 systemd[1]: Finished issuegen.service. Sep 6 00:01:42.099796 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:01:42.106289 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:01:42.108547 systemd[1]: Started getty@tty1.service. Sep 6 00:01:42.110648 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 6 00:01:42.111777 systemd[1]: Reached target getty.target. Sep 6 00:01:42.402178 systemd[1]: Started kubelet.service. Sep 6 00:01:42.405195 systemd[1]: Reached target multi-user.target. Sep 6 00:01:42.409283 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:01:42.418529 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:01:42.418722 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:01:42.421209 systemd[1]: Startup finished in 585ms (kernel) + 4.186s (initrd) + 4.668s (userspace) = 9.440s. Sep 6 00:01:42.826042 kubelet[1271]: E0906 00:01:42.825859 1271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:01:42.828024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:01:42.828155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:01:46.122198 systemd[1]: Created slice system-sshd.slice. Sep 6 00:01:46.123290 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:51740.service. Sep 6 00:01:46.170025 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 51740 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:46.172358 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:46.180778 systemd[1]: Created slice user-500.slice. Sep 6 00:01:46.181951 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:01:46.183546 systemd-logind[1203]: New session 1 of user core. Sep 6 00:01:46.190315 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:01:46.191734 systemd[1]: Starting user@500.service... Sep 6 00:01:46.196641 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:46.284403 systemd[1284]: Queued start job for default target default.target. 
Sep 6 00:01:46.284931 systemd[1284]: Reached target paths.target. Sep 6 00:01:46.284961 systemd[1284]: Reached target sockets.target. Sep 6 00:01:46.284973 systemd[1284]: Reached target timers.target. Sep 6 00:01:46.284982 systemd[1284]: Reached target basic.target. Sep 6 00:01:46.285022 systemd[1284]: Reached target default.target. Sep 6 00:01:46.285048 systemd[1284]: Startup finished in 81ms. Sep 6 00:01:46.285117 systemd[1]: Started user@500.service. Sep 6 00:01:46.286097 systemd[1]: Started session-1.scope. Sep 6 00:01:46.339147 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:51750.service. Sep 6 00:01:46.403547 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:46.404800 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:46.408230 systemd-logind[1203]: New session 2 of user core. Sep 6 00:01:46.409481 systemd[1]: Started session-2.scope. Sep 6 00:01:46.466144 sshd[1293]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:46.470395 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:51760.service. Sep 6 00:01:46.470845 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:51750.service: Deactivated successfully. Sep 6 00:01:46.471546 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:01:46.472062 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:01:46.472910 systemd-logind[1203]: Removed session 2. Sep 6 00:01:46.511586 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:46.513164 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:46.519686 systemd-logind[1203]: New session 3 of user core. Sep 6 00:01:46.520329 systemd[1]: Started session-3.scope. Sep 6 00:01:46.574796 sshd[1298]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:46.578570 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:51760.service: Deactivated successfully. Sep 6 00:01:46.579228 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:01:46.579791 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:01:46.580993 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:51776.service. Sep 6 00:01:46.581789 systemd-logind[1203]: Removed session 3. Sep 6 00:01:46.622236 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 51776 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:46.623530 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:46.631855 systemd-logind[1203]: New session 4 of user core. Sep 6 00:01:46.632125 systemd[1]: Started session-4.scope. Sep 6 00:01:46.691424 sshd[1305]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:46.695927 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:51778.service. Sep 6 00:01:46.697693 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:51776.service: Deactivated successfully. Sep 6 00:01:46.698385 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:01:46.699122 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:01:46.700276 systemd-logind[1203]: Removed session 4. 
Sep 6 00:01:46.739103 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 51778 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:01:46.740813 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:01:46.748809 systemd-logind[1203]: New session 5 of user core. Sep 6 00:01:46.749307 systemd[1]: Started session-5.scope. Sep 6 00:01:46.827123 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:01:46.827361 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:01:46.841324 systemd[1]: Starting coreos-metadata.service... Sep 6 00:01:46.850272 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 6 00:01:46.850483 systemd[1]: Finished coreos-metadata.service. Sep 6 00:01:47.340216 systemd[1]: Stopped kubelet.service. Sep 6 00:01:47.342846 systemd[1]: Starting kubelet.service... Sep 6 00:01:47.371420 systemd[1]: Reloading. Sep 6 00:01:47.431355 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-09-06T00:01:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:01:47.431682 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-09-06T00:01:47Z" level=info msg="torcx already run" Sep 6 00:01:47.710373 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:01:47.710393 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:01:47.733173 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:01:47.816561 systemd[1]: Started kubelet.service. Sep 6 00:01:47.825052 systemd[1]: Stopping kubelet.service... Sep 6 00:01:47.826340 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:01:47.826627 systemd[1]: Stopped kubelet.service. Sep 6 00:01:47.828511 systemd[1]: Starting kubelet.service... Sep 6 00:01:47.927333 systemd[1]: Started kubelet.service. Sep 6 00:01:47.963444 kubelet[1422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:01:47.963444 kubelet[1422]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:01:47.963444 kubelet[1422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:01:47.963930 kubelet[1422]: I0906 00:01:47.963417 1422 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:01:49.332044 kubelet[1422]: I0906 00:01:49.332001 1422 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 00:01:49.332375 kubelet[1422]: I0906 00:01:49.332360 1422 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:01:49.332712 kubelet[1422]: I0906 00:01:49.332691 1422 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 00:01:49.351198 kubelet[1422]: I0906 00:01:49.351159 1422 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:01:49.361751 kubelet[1422]: E0906 00:01:49.361694 1422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:01:49.361751 kubelet[1422]: I0906 00:01:49.361753 1422 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:01:49.366096 kubelet[1422]: I0906 00:01:49.366056 1422 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:01:49.366842 kubelet[1422]: I0906 00:01:49.366792 1422 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:01:49.367040 kubelet[1422]: I0906 00:01:49.366839 1422 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:01:49.367154 kubelet[1422]: I0906 00:01:49.367097 1422 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:01:49.367154 kubelet[1422]: I0906 00:01:49.367108 1422 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 
00:01:49.367612 kubelet[1422]: I0906 00:01:49.367566 1422 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:01:49.371583 kubelet[1422]: I0906 00:01:49.371551 1422 kubelet.go:446] "Attempting to sync node with API server" Sep 6 00:01:49.371583 kubelet[1422]: I0906 00:01:49.371579 1422 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:01:49.371684 kubelet[1422]: I0906 00:01:49.371601 1422 kubelet.go:352] "Adding apiserver pod source" Sep 6 00:01:49.371684 kubelet[1422]: I0906 00:01:49.371616 1422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:01:49.372784 kubelet[1422]: E0906 00:01:49.372018 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:49.377677 kubelet[1422]: I0906 00:01:49.377649 1422 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:01:49.378478 kubelet[1422]: I0906 00:01:49.378357 1422 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:01:49.378766 kubelet[1422]: W0906 00:01:49.378748 1422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:01:49.379491 kubelet[1422]: E0906 00:01:49.379457 1422 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:49.379932 kubelet[1422]: I0906 00:01:49.379915 1422 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:01:49.379970 kubelet[1422]: I0906 00:01:49.379954 1422 server.go:1287] "Started kubelet" Sep 6 00:01:49.397200 kubelet[1422]: I0906 00:01:49.397133 1422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:01:49.397449 kubelet[1422]: W0906 00:01:49.397413 1422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 6 00:01:49.397510 kubelet[1422]: E0906 00:01:49.397449 1422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.47\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 6 00:01:49.397510 kubelet[1422]: W0906 00:01:49.397495 1422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 6 00:01:49.397510 kubelet[1422]: E0906 00:01:49.397506 1422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 6 00:01:49.397767 kubelet[1422]: I0906 00:01:49.397746 1422 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:01:49.397920 kubelet[1422]: I0906 00:01:49.397898 1422 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:01:49.398382 kubelet[1422]: E0906 00:01:49.398361 1422 kubelet.go:1555] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:01:49.399325 kubelet[1422]: I0906 00:01:49.399305 1422 server.go:479] "Adding debug handlers to kubelet server" Sep 6 00:01:49.400429 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 00:01:49.400739 kubelet[1422]: I0906 00:01:49.400715 1422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:01:49.400789 kubelet[1422]: I0906 00:01:49.400744 1422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:01:49.401342 kubelet[1422]: I0906 00:01:49.401250 1422 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:01:49.401994 kubelet[1422]: E0906 00:01:49.401970 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:49.402395 kubelet[1422]: I0906 00:01:49.402328 1422 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:01:49.402395 kubelet[1422]: I0906 00:01:49.402391 1422 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:01:49.402662 kubelet[1422]: I0906 00:01:49.402639 1422 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:01:49.402889 kubelet[1422]: I0906 00:01:49.402843 1422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:01:49.404351 kubelet[1422]: I0906 00:01:49.404332 1422 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:01:49.415776 kubelet[1422]: I0906 00:01:49.415747 1422 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:01:49.415776 kubelet[1422]: I0906 00:01:49.415768 1422 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:01:49.415930 kubelet[1422]: I0906 00:01:49.415790 1422 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:01:49.429522 kubelet[1422]: E0906 00:01:49.429475 1422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.47\" not found" node="10.0.0.47" Sep 6 00:01:49.502799 kubelet[1422]: E0906 00:01:49.502734 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:49.509769 kubelet[1422]: I0906 00:01:49.509618 1422 policy_none.go:49] "None policy: Start" Sep 6 00:01:49.509769 kubelet[1422]: I0906 00:01:49.509644 1422 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:01:49.509769 kubelet[1422]: I0906 00:01:49.509656 1422 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:01:49.514600 systemd[1]: Created slice kubepods.slice. Sep 6 00:01:49.521589 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:01:49.524138 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 6 00:01:49.536700 kubelet[1422]: I0906 00:01:49.536615 1422 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:01:49.537674 kubelet[1422]: I0906 00:01:49.537142 1422 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:01:49.537674 kubelet[1422]: I0906 00:01:49.537171 1422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:01:49.538132 kubelet[1422]: I0906 00:01:49.538108 1422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:01:49.540518 kubelet[1422]: E0906 00:01:49.540127 1422 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:01:49.540518 kubelet[1422]: E0906 00:01:49.540170 1422 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.47\" not found" Sep 6 00:01:49.585693 kubelet[1422]: I0906 00:01:49.585573 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:01:49.587031 kubelet[1422]: I0906 00:01:49.587010 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:01:49.587146 kubelet[1422]: I0906 00:01:49.587135 1422 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 00:01:49.587240 kubelet[1422]: I0906 00:01:49.587228 1422 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:01:49.587306 kubelet[1422]: I0906 00:01:49.587297 1422 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 00:01:49.587444 kubelet[1422]: E0906 00:01:49.587429 1422 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 00:01:49.638588 kubelet[1422]: I0906 00:01:49.638542 1422 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.47" Sep 6 00:01:49.644345 kubelet[1422]: I0906 00:01:49.644314 1422 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.47" Sep 6 00:01:49.644517 kubelet[1422]: E0906 00:01:49.644501 1422 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.47\": node \"10.0.0.47\" not found" Sep 6 00:01:49.657348 kubelet[1422]: E0906 00:01:49.657294 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:49.757544 kubelet[1422]: E0906 00:01:49.757497 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:49.858055 kubelet[1422]: E0906 00:01:49.857908 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:49.959030 kubelet[1422]: E0906 00:01:49.958979 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:50.023930 sudo[1314]: pam_unix(sudo:session): session closed for user root Sep 6 00:01:50.025408 sshd[1310]: pam_unix(sshd:session): session closed for user core Sep 6 00:01:50.029949 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:51778.service: Deactivated successfully. Sep 6 00:01:50.030611 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:01:50.031118 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. 
Sep 6 00:01:50.031715 systemd-logind[1203]: Removed session 5. Sep 6 00:01:50.059667 kubelet[1422]: E0906 00:01:50.059612 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:50.160565 kubelet[1422]: E0906 00:01:50.160451 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.47\" not found" Sep 6 00:01:50.261582 kubelet[1422]: I0906 00:01:50.261541 1422 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 6 00:01:50.261902 env[1216]: time="2025-09-06T00:01:50.261863122Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:01:50.262315 kubelet[1422]: I0906 00:01:50.262297 1422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 6 00:01:50.334377 kubelet[1422]: I0906 00:01:50.334339 1422 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 6 00:01:50.334959 kubelet[1422]: W0906 00:01:50.334931 1422 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:01:50.334959 kubelet[1422]: W0906 00:01:50.334935 1422 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:01:50.335067 kubelet[1422]: W0906 00:01:50.334985 1422 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:01:50.335150 kubelet[1422]: W0906 00:01:50.335134 1422 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:01:50.372401 kubelet[1422]: E0906 00:01:50.372351 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:50.372588 kubelet[1422]: I0906 00:01:50.372568 1422 apiserver.go:52] "Watching apiserver" Sep 6 00:01:50.390866 systemd[1]: Created slice kubepods-besteffort-pod7b580fb9_66b1_4d0a_9cf5_874d145005b2.slice. 
Sep 6 00:01:50.404294 kubelet[1422]: I0906 00:01:50.404253 1422 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:01:50.408781 kubelet[1422]: I0906 00:01:50.408749 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-run\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.408922 kubelet[1422]: I0906 00:01:50.408905 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-lib-modules\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409010 kubelet[1422]: I0906 00:01:50.408994 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29mq\" (UniqueName: \"kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-kube-api-access-k29mq\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409084 kubelet[1422]: I0906 00:01:50.409069 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjhzm\" (UniqueName: \"kubernetes.io/projected/7b580fb9-66b1-4d0a-9cf5-874d145005b2-kube-api-access-hjhzm\") pod \"kube-proxy-t5xc4\" (UID: \"7b580fb9-66b1-4d0a-9cf5-874d145005b2\") " pod="kube-system/kube-proxy-t5xc4" Sep 6 00:01:50.409244 kubelet[1422]: I0906 00:01:50.409228 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hostproc\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409323 kubelet[1422]: I0906 00:01:50.409310 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b580fb9-66b1-4d0a-9cf5-874d145005b2-lib-modules\") pod \"kube-proxy-t5xc4\" (UID: \"7b580fb9-66b1-4d0a-9cf5-874d145005b2\") " pod="kube-system/kube-proxy-t5xc4" Sep 6 00:01:50.409390 kubelet[1422]: I0906 00:01:50.409376 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-cgroup\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409458 kubelet[1422]: I0906 00:01:50.409444 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cni-path\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409532 kubelet[1422]: I0906 00:01:50.409514 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-etc-cni-netd\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409633 kubelet[1422]: 
I0906 00:01:50.409616 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-config-path\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409709 kubelet[1422]: I0906 00:01:50.409692 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-net\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409782 kubelet[1422]: I0906 00:01:50.409769 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hubble-tls\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.409885 kubelet[1422]: I0906 00:01:50.409870 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b580fb9-66b1-4d0a-9cf5-874d145005b2-kube-proxy\") pod \"kube-proxy-t5xc4\" (UID: \"7b580fb9-66b1-4d0a-9cf5-874d145005b2\") " pod="kube-system/kube-proxy-t5xc4" Sep 6 00:01:50.409958 kubelet[1422]: I0906 00:01:50.409945 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-bpf-maps\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.410047 kubelet[1422]: I0906 00:01:50.410033 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-xtables-lock\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.410133 kubelet[1422]: I0906 00:01:50.410119 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb58470b-5f8c-48af-9e30-2fc4aec8545e-clustermesh-secrets\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.410201 kubelet[1422]: I0906 00:01:50.410189 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-kernel\") pod \"cilium-gxm6k\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " pod="kube-system/cilium-gxm6k" Sep 6 00:01:50.410278 kubelet[1422]: I0906 00:01:50.410265 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b580fb9-66b1-4d0a-9cf5-874d145005b2-xtables-lock\") pod \"kube-proxy-t5xc4\" (UID: \"7b580fb9-66b1-4d0a-9cf5-874d145005b2\") " pod="kube-system/kube-proxy-t5xc4" Sep 6 00:01:50.418342 systemd[1]: Created slice kubepods-burstable-podeb58470b_5f8c_48af_9e30_2fc4aec8545e.slice. 
Sep 6 00:01:50.512430 kubelet[1422]: I0906 00:01:50.512375 1422 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:01:50.717113 kubelet[1422]: E0906 00:01:50.717002 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:50.718448 env[1216]: time="2025-09-06T00:01:50.718123577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5xc4,Uid:7b580fb9-66b1-4d0a-9cf5-874d145005b2,Namespace:kube-system,Attempt:0,}" Sep 6 00:01:50.730450 kubelet[1422]: E0906 00:01:50.730418 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:50.731056 env[1216]: time="2025-09-06T00:01:50.731018240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxm6k,Uid:eb58470b-5f8c-48af-9e30-2fc4aec8545e,Namespace:kube-system,Attempt:0,}" Sep 6 00:01:51.291762 env[1216]: time="2025-09-06T00:01:51.291673447Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.294976 env[1216]: time="2025-09-06T00:01:51.294930383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.300728 env[1216]: time="2025-09-06T00:01:51.300678099Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.303080 env[1216]: time="2025-09-06T00:01:51.303038410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.306470 env[1216]: time="2025-09-06T00:01:51.306433050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.308209 env[1216]: time="2025-09-06T00:01:51.308172303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.311331 env[1216]: time="2025-09-06T00:01:51.311293194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.315420 env[1216]: time="2025-09-06T00:01:51.315381118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:51.351903 env[1216]: time="2025-09-06T00:01:51.350905743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:01:51.351903 env[1216]: time="2025-09-06T00:01:51.350971301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:01:51.351903 env[1216]: time="2025-09-06T00:01:51.350983669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:01:51.351903 env[1216]: time="2025-09-06T00:01:51.351304706Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b pid=1484 runtime=io.containerd.runc.v2 Sep 6 00:01:51.357050 env[1216]: time="2025-09-06T00:01:51.356966049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:01:51.357050 env[1216]: time="2025-09-06T00:01:51.357012368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:01:51.357050 env[1216]: time="2025-09-06T00:01:51.357031162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:01:51.357750 env[1216]: time="2025-09-06T00:01:51.357690617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70da2292ddd17a3265b79207556148f9d824f28ab8666a7b75ce964907f7b8a9 pid=1486 runtime=io.containerd.runc.v2 Sep 6 00:01:51.373410 kubelet[1422]: E0906 00:01:51.373262 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:51.375349 systemd[1]: Started cri-containerd-2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b.scope. Sep 6 00:01:51.376572 systemd[1]: Started cri-containerd-70da2292ddd17a3265b79207556148f9d824f28ab8666a7b75ce964907f7b8a9.scope. Sep 6 00:01:51.412627 env[1216]: time="2025-09-06T00:01:51.412577408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxm6k,Uid:eb58470b-5f8c-48af-9e30-2fc4aec8545e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\"" Sep 6 00:01:51.415082 kubelet[1422]: E0906 00:01:51.415033 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:51.420364 env[1216]: time="2025-09-06T00:01:51.418532305Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:01:51.426162 env[1216]: time="2025-09-06T00:01:51.426119065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5xc4,Uid:7b580fb9-66b1-4d0a-9cf5-874d145005b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"70da2292ddd17a3265b79207556148f9d824f28ab8666a7b75ce964907f7b8a9\"" Sep 6 00:01:51.427217 kubelet[1422]: E0906 00:01:51.426715 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:51.518375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088895175.mount: Deactivated successfully. 
Sep 6 00:01:52.373810 kubelet[1422]: E0906 00:01:52.373763 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:53.374923 kubelet[1422]: E0906 00:01:53.374861 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:54.375091 kubelet[1422]: E0906 00:01:54.375012 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:55.375936 kubelet[1422]: E0906 00:01:55.375896 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:55.696469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155467651.mount: Deactivated successfully. Sep 6 00:01:56.376774 kubelet[1422]: E0906 00:01:56.376733 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:57.384333 kubelet[1422]: E0906 00:01:57.377030 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:58.133046 env[1216]: time="2025-09-06T00:01:58.132995146Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:58.134355 env[1216]: time="2025-09-06T00:01:58.134317989Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:58.136683 env[1216]: time="2025-09-06T00:01:58.136647783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:01:58.137444 env[1216]: time="2025-09-06T00:01:58.137370052Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:01:58.140777 env[1216]: time="2025-09-06T00:01:58.140731780Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 00:01:58.142164 env[1216]: time="2025-09-06T00:01:58.142120893Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:01:58.162672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575123996.mount: Deactivated successfully. Sep 6 00:01:58.172017 env[1216]: time="2025-09-06T00:01:58.171969023Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\"" Sep 6 00:01:58.173021 env[1216]: time="2025-09-06T00:01:58.172960111Z" level=info msg="StartContainer for \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\"" Sep 6 00:01:58.198563 systemd[1]: Started cri-containerd-ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7.scope. 
Sep 6 00:01:58.235891 env[1216]: time="2025-09-06T00:01:58.235832019Z" level=info msg="StartContainer for \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\" returns successfully" Sep 6 00:01:58.241327 systemd[1]: cri-containerd-ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7.scope: Deactivated successfully. Sep 6 00:01:58.340930 env[1216]: time="2025-09-06T00:01:58.340867511Z" level=info msg="shim disconnected" id=ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7 Sep 6 00:01:58.340930 env[1216]: time="2025-09-06T00:01:58.340914141Z" level=warning msg="cleaning up after shim disconnected" id=ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7 namespace=k8s.io Sep 6 00:01:58.340930 env[1216]: time="2025-09-06T00:01:58.340923941Z" level=info msg="cleaning up dead shim" Sep 6 00:01:58.348971 env[1216]: time="2025-09-06T00:01:58.348928831Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:01:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1604 runtime=io.containerd.runc.v2\n" Sep 6 00:01:58.377326 kubelet[1422]: E0906 00:01:58.377270 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:58.608716 kubelet[1422]: E0906 00:01:58.608635 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:58.611031 env[1216]: time="2025-09-06T00:01:58.610983017Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:01:58.646440 env[1216]: time="2025-09-06T00:01:58.646387879Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\"" Sep 6 00:01:58.649709 env[1216]: time="2025-09-06T00:01:58.649524930Z" level=info msg="StartContainer for \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\"" Sep 6 00:01:58.676331 systemd[1]: Started cri-containerd-094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2.scope. Sep 6 00:01:58.720895 env[1216]: time="2025-09-06T00:01:58.720475308Z" level=info msg="StartContainer for \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\" returns successfully" Sep 6 00:01:58.739384 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:01:58.739583 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:01:58.740874 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:01:58.743368 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:01:58.746072 systemd[1]: cri-containerd-094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2.scope: Deactivated successfully. Sep 6 00:01:58.756623 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:01:58.772165 env[1216]: time="2025-09-06T00:01:58.772116193Z" level=info msg="shim disconnected" id=094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2 Sep 6 00:01:58.772417 env[1216]: time="2025-09-06T00:01:58.772396217Z" level=warning msg="cleaning up after shim disconnected" id=094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2 namespace=k8s.io Sep 6 00:01:58.772499 env[1216]: time="2025-09-06T00:01:58.772482971Z" level=info msg="cleaning up dead shim" Sep 6 00:01:58.781754 env[1216]: time="2025-09-06T00:01:58.781708766Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:01:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1669 runtime=io.containerd.runc.v2\n" Sep 6 00:01:59.158184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7-rootfs.mount: Deactivated successfully. Sep 6 00:01:59.378064 kubelet[1422]: E0906 00:01:59.377976 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:01:59.506133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225190442.mount: Deactivated successfully. Sep 6 00:01:59.611567 kubelet[1422]: E0906 00:01:59.611536 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:01:59.614895 env[1216]: time="2025-09-06T00:01:59.614824334Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:01:59.637507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053770211.mount: Deactivated successfully. Sep 6 00:01:59.646466 env[1216]: time="2025-09-06T00:01:59.646413105Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\"" Sep 6 00:01:59.647297 env[1216]: time="2025-09-06T00:01:59.647264710Z" level=info msg="StartContainer for \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\"" Sep 6 00:01:59.667872 systemd[1]: Started cri-containerd-f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a.scope. Sep 6 00:01:59.700770 env[1216]: time="2025-09-06T00:01:59.700719413Z" level=info msg="StartContainer for \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\" returns successfully" Sep 6 00:01:59.701945 systemd[1]: cri-containerd-f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a.scope: Deactivated successfully. 
Sep 6 00:01:59.818939 env[1216]: time="2025-09-06T00:01:59.818889597Z" level=info msg="shim disconnected" id=f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a Sep 6 00:01:59.819226 env[1216]: time="2025-09-06T00:01:59.819203559Z" level=warning msg="cleaning up after shim disconnected" id=f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a namespace=k8s.io Sep 6 00:01:59.819300 env[1216]: time="2025-09-06T00:01:59.819285171Z" level=info msg="cleaning up dead shim" Sep 6 00:01:59.826111 env[1216]: time="2025-09-06T00:01:59.826067980Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:01:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1726 runtime=io.containerd.runc.v2\n" Sep 6 00:02:00.006374 env[1216]: time="2025-09-06T00:02:00.006316899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:00.007745 env[1216]: time="2025-09-06T00:02:00.007711062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:00.009527 env[1216]: time="2025-09-06T00:02:00.009473378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:00.010726 env[1216]: time="2025-09-06T00:02:00.010696485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:00.011405 env[1216]: time="2025-09-06T00:02:00.011374407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 6 00:02:00.013665 env[1216]: time="2025-09-06T00:02:00.013632995Z" level=info msg="CreateContainer within sandbox \"70da2292ddd17a3265b79207556148f9d824f28ab8666a7b75ce964907f7b8a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:02:00.030258 env[1216]: time="2025-09-06T00:02:00.030168504Z" level=info msg="CreateContainer within sandbox \"70da2292ddd17a3265b79207556148f9d824f28ab8666a7b75ce964907f7b8a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82c77373047f08a3733bb2d027d2d008e592b42b867350155a96a4e1486f9c9a\"" Sep 6 00:02:00.031085 env[1216]: time="2025-09-06T00:02:00.031050264Z" level=info msg="StartContainer for \"82c77373047f08a3733bb2d027d2d008e592b42b867350155a96a4e1486f9c9a\"" Sep 6 00:02:00.048787 systemd[1]: Started cri-containerd-82c77373047f08a3733bb2d027d2d008e592b42b867350155a96a4e1486f9c9a.scope. 
Sep 6 00:02:00.081486 env[1216]: time="2025-09-06T00:02:00.081332345Z" level=info msg="StartContainer for \"82c77373047f08a3733bb2d027d2d008e592b42b867350155a96a4e1486f9c9a\" returns successfully" Sep 6 00:02:00.378660 kubelet[1422]: E0906 00:02:00.378526 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:00.614558 kubelet[1422]: E0906 00:02:00.614527 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:00.617311 kubelet[1422]: E0906 00:02:00.617285 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:00.619310 env[1216]: time="2025-09-06T00:02:00.619267568Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:02:00.626909 kubelet[1422]: I0906 00:02:00.626812 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5xc4" podStartSLOduration=3.042036255 podStartE2EDuration="11.626795045s" podCreationTimestamp="2025-09-06 00:01:49 +0000 UTC" firstStartedPulling="2025-09-06 00:01:51.427523458 +0000 UTC m=+3.496781949" lastFinishedPulling="2025-09-06 00:02:00.012282248 +0000 UTC m=+12.081540739" observedRunningTime="2025-09-06 00:02:00.624943731 +0000 UTC m=+12.694202223" watchObservedRunningTime="2025-09-06 00:02:00.626795045 +0000 UTC m=+12.696053536" Sep 6 00:02:00.636934 env[1216]: time="2025-09-06T00:02:00.636776001Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\"" Sep 6 00:02:00.637642 env[1216]: time="2025-09-06T00:02:00.637612258Z" level=info msg="StartContainer for \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\"" Sep 6 00:02:00.659897 systemd[1]: Started cri-containerd-64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a.scope. Sep 6 00:02:00.696294 systemd[1]: cri-containerd-64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a.scope: Deactivated successfully. 
Sep 6 00:02:00.696955 env[1216]: time="2025-09-06T00:02:00.696744155Z" level=info msg="StartContainer for \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\" returns successfully" Sep 6 00:02:00.756705 env[1216]: time="2025-09-06T00:02:00.756652001Z" level=info msg="shim disconnected" id=64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a Sep 6 00:02:00.756705 env[1216]: time="2025-09-06T00:02:00.756703442Z" level=warning msg="cleaning up after shim disconnected" id=64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a namespace=k8s.io Sep 6 00:02:00.756705 env[1216]: time="2025-09-06T00:02:00.756713273Z" level=info msg="cleaning up dead shim" Sep 6 00:02:00.763900 env[1216]: time="2025-09-06T00:02:00.763851292Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:02:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1951 runtime=io.containerd.runc.v2\n" Sep 6 00:02:01.158228 systemd[1]: run-containerd-runc-k8s.io-64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a-runc.97pyvy.mount: Deactivated successfully. Sep 6 00:02:01.158326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a-rootfs.mount: Deactivated successfully. Sep 6 00:02:01.379326 kubelet[1422]: E0906 00:02:01.379276 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:01.620664 kubelet[1422]: E0906 00:02:01.620632 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:01.621089 kubelet[1422]: E0906 00:02:01.621011 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:01.622581 env[1216]: time="2025-09-06T00:02:01.622546933Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:02:01.636487 env[1216]: time="2025-09-06T00:02:01.636302937Z" level=info msg="CreateContainer within sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\"" Sep 6 00:02:01.637771 env[1216]: time="2025-09-06T00:02:01.637734820Z" level=info msg="StartContainer for \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\"" Sep 6 00:02:01.654691 systemd[1]: Started cri-containerd-a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759.scope. Sep 6 00:02:01.687367 env[1216]: time="2025-09-06T00:02:01.687306819Z" level=info msg="StartContainer for \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\" returns successfully" Sep 6 00:02:01.825349 kubelet[1422]: I0906 00:02:01.825306 1422 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:02:01.854863 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 00:02:02.122877 kernel: Initializing XFRM netlink socket Sep 6 00:02:02.124868 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 6 00:02:02.158278 systemd[1]: run-containerd-runc-k8s.io-a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759-runc.YlTwEb.mount: Deactivated successfully. Sep 6 00:02:02.379632 kubelet[1422]: E0906 00:02:02.379498 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:02.626421 kubelet[1422]: E0906 00:02:02.626394 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:02.654545 kubelet[1422]: I0906 00:02:02.654402 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gxm6k" podStartSLOduration=6.932733361 podStartE2EDuration="13.654383031s" podCreationTimestamp="2025-09-06 00:01:49 +0000 UTC" firstStartedPulling="2025-09-06 00:01:51.418058812 +0000 UTC m=+3.487317303" lastFinishedPulling="2025-09-06 00:01:58.139708482 +0000 UTC m=+10.208966973" observedRunningTime="2025-09-06 00:02:02.64762635 +0000 UTC m=+14.716884841" watchObservedRunningTime="2025-09-06 00:02:02.654383031 +0000 UTC m=+14.723641522" Sep 6 00:02:03.380518 kubelet[1422]: E0906 00:02:03.379784 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:03.630197 kubelet[1422]: E0906 00:02:03.629218 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:03.760548 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:02:03.760630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:02:03.755247 systemd-networkd[1055]: cilium_host: Link UP Sep 6 00:02:03.755454 systemd-networkd[1055]: cilium_net: Link UP Sep 6 00:02:03.755726 systemd-networkd[1055]: cilium_net: Gained carrier Sep 6 00:02:03.761051 systemd-networkd[1055]: cilium_host: Gained carrier Sep 6 00:02:03.854388 systemd-networkd[1055]: cilium_vxlan: Link UP Sep 6 00:02:03.854395 systemd-networkd[1055]: cilium_vxlan: Gained carrier Sep 6 00:02:04.004029 systemd-networkd[1055]: cilium_host: Gained IPv6LL Sep 6 00:02:04.149567 kernel: NET: Registered PF_ALG protocol family Sep 6 00:02:04.381567 kubelet[1422]: E0906 00:02:04.381528 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:04.628978 systemd-networkd[1055]: cilium_net: Gained IPv6LL Sep 6 00:02:04.630638 kubelet[1422]: E0906 00:02:04.630594 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:04.767113 systemd-networkd[1055]: lxc_health: Link UP Sep 6 00:02:04.768621 systemd-networkd[1055]: lxc_health: Gained carrier Sep 6 00:02:04.769217 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:02:05.140073 systemd-networkd[1055]: cilium_vxlan: Gained IPv6LL Sep 6 00:02:05.382524 kubelet[1422]: E0906 00:02:05.382482 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:05.908001 systemd-networkd[1055]: lxc_health: Gained IPv6LL Sep 6 00:02:06.193664 systemd[1]: Created slice kubepods-besteffort-pod0034bb71_14c5_44e9_9aa7_7a7852ee4eba.slice. 
Sep 6 00:02:06.221748 kubelet[1422]: I0906 00:02:06.221704 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwzr6\" (UniqueName: \"kubernetes.io/projected/0034bb71-14c5-44e9-9aa7-7a7852ee4eba-kube-api-access-rwzr6\") pod \"nginx-deployment-7fcdb87857-jlf99\" (UID: \"0034bb71-14c5-44e9-9aa7-7a7852ee4eba\") " pod="default/nginx-deployment-7fcdb87857-jlf99" Sep 6 00:02:06.382658 kubelet[1422]: E0906 00:02:06.382584 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:06.497399 env[1216]: time="2025-09-06T00:02:06.497273713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-jlf99,Uid:0034bb71-14c5-44e9-9aa7-7a7852ee4eba,Namespace:default,Attempt:0,}" Sep 6 00:02:06.573892 kernel: eth0: renamed from tmpe1909 Sep 6 00:02:06.572355 systemd-networkd[1055]: lxc7359ac248201: Link UP Sep 6 00:02:06.582692 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:02:06.582808 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7359ac248201: link becomes ready Sep 6 00:02:06.583022 systemd-networkd[1055]: lxc7359ac248201: Gained carrier Sep 6 00:02:06.730654 kubelet[1422]: E0906 00:02:06.730603 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:07.382795 kubelet[1422]: E0906 00:02:07.382726 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:07.634533 kubelet[1422]: E0906 00:02:07.634419 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:07.635985 systemd-networkd[1055]: lxc7359ac248201: Gained IPv6LL Sep 6 00:02:08.383833 kubelet[1422]: E0906 00:02:08.383782 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:08.636939 kubelet[1422]: E0906 00:02:08.636671 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:02:09.372314 kubelet[1422]: E0906 00:02:09.372265 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:09.384842 kubelet[1422]: E0906 00:02:09.384801 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:09.385544 env[1216]: time="2025-09-06T00:02:09.385456810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:02:09.385544 env[1216]: time="2025-09-06T00:02:09.385516226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:02:09.385544 env[1216]: time="2025-09-06T00:02:09.385527517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:02:09.386067 env[1216]: time="2025-09-06T00:02:09.386025787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e19093fc9e0bd76bee34c3a4083837fa6330fae519177783293594148e4abc46 pid=2491 runtime=io.containerd.runc.v2 Sep 6 00:02:09.398991 systemd[1]: Started cri-containerd-e19093fc9e0bd76bee34c3a4083837fa6330fae519177783293594148e4abc46.scope. Sep 6 00:02:09.417157 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:02:09.433713 env[1216]: time="2025-09-06T00:02:09.433671156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-jlf99,Uid:0034bb71-14c5-44e9-9aa7-7a7852ee4eba,Namespace:default,Attempt:0,} returns sandbox id \"e19093fc9e0bd76bee34c3a4083837fa6330fae519177783293594148e4abc46\"" Sep 6 00:02:09.434949 env[1216]: time="2025-09-06T00:02:09.434913327Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:02:10.384957 kubelet[1422]: E0906 00:02:10.384907 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:11.385231 kubelet[1422]: E0906 00:02:11.385180 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:11.771526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875613580.mount: Deactivated successfully. Sep 6 00:02:12.385947 kubelet[1422]: E0906 00:02:12.385902 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:13.044621 env[1216]: time="2025-09-06T00:02:13.044572732Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:13.046662 env[1216]: time="2025-09-06T00:02:13.046620344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:13.047975 env[1216]: time="2025-09-06T00:02:13.047945637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:13.049474 env[1216]: time="2025-09-06T00:02:13.049439943Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:13.051024 env[1216]: time="2025-09-06T00:02:13.050992442Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 00:02:13.052945 env[1216]: time="2025-09-06T00:02:13.052911143Z" level=info msg="CreateContainer within sandbox \"e19093fc9e0bd76bee34c3a4083837fa6330fae519177783293594148e4abc46\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 6 00:02:13.062314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322278586.mount: Deactivated successfully. 
Sep 6 00:02:13.066291 env[1216]: time="2025-09-06T00:02:13.066247558Z" level=info msg="CreateContainer within sandbox \"e19093fc9e0bd76bee34c3a4083837fa6330fae519177783293594148e4abc46\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"208898f5201c8579d86d4362be4f9e48c47992ac6f44b873a64309bead3cbef9\"" Sep 6 00:02:13.066956 env[1216]: time="2025-09-06T00:02:13.066869862Z" level=info msg="StartContainer for \"208898f5201c8579d86d4362be4f9e48c47992ac6f44b873a64309bead3cbef9\"" Sep 6 00:02:13.088753 systemd[1]: Started cri-containerd-208898f5201c8579d86d4362be4f9e48c47992ac6f44b873a64309bead3cbef9.scope. Sep 6 00:02:13.115193 env[1216]: time="2025-09-06T00:02:13.115141435Z" level=info msg="StartContainer for \"208898f5201c8579d86d4362be4f9e48c47992ac6f44b873a64309bead3cbef9\" returns successfully" Sep 6 00:02:13.386913 kubelet[1422]: E0906 00:02:13.386873 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:13.663192 kubelet[1422]: I0906 00:02:13.663010 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-jlf99" podStartSLOduration=4.0457531 podStartE2EDuration="7.662996105s" podCreationTimestamp="2025-09-06 00:02:06 +0000 UTC" firstStartedPulling="2025-09-06 00:02:09.434485604 +0000 UTC m=+21.503744095" lastFinishedPulling="2025-09-06 00:02:13.051728609 +0000 UTC m=+25.120987100" observedRunningTime="2025-09-06 00:02:13.66261963 +0000 UTC m=+25.731878121" watchObservedRunningTime="2025-09-06 00:02:13.662996105 +0000 UTC m=+25.732254596" Sep 6 00:02:14.387564 kubelet[1422]: E0906 00:02:14.387511 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:15.388540 kubelet[1422]: E0906 00:02:15.388488 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:16.389577 kubelet[1422]: E0906 00:02:16.389479 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:17.390538 kubelet[1422]: E0906 00:02:17.390494 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:17.588726 systemd[1]: Created slice kubepods-besteffort-pod91ec9198_2405_424e_8782_050037649e81.slice. 
Sep 6 00:02:17.701094 kubelet[1422]: I0906 00:02:17.700723 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j699g\" (UniqueName: \"kubernetes.io/projected/91ec9198-2405-424e-8782-050037649e81-kube-api-access-j699g\") pod \"nfs-server-provisioner-0\" (UID: \"91ec9198-2405-424e-8782-050037649e81\") " pod="default/nfs-server-provisioner-0" Sep 6 00:02:17.701094 kubelet[1422]: I0906 00:02:17.700779 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/91ec9198-2405-424e-8782-050037649e81-data\") pod \"nfs-server-provisioner-0\" (UID: \"91ec9198-2405-424e-8782-050037649e81\") " pod="default/nfs-server-provisioner-0" Sep 6 00:02:17.891861 env[1216]: time="2025-09-06T00:02:17.891788667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:91ec9198-2405-424e-8782-050037649e81,Namespace:default,Attempt:0,}" Sep 6 00:02:17.933947 systemd-networkd[1055]: lxc884aea795188: Link UP Sep 6 00:02:17.942050 kernel: eth0: renamed from tmpe403e Sep 6 00:02:17.952861 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:02:17.952973 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc884aea795188: link becomes ready Sep 6 00:02:17.953808 systemd-networkd[1055]: lxc884aea795188: Gained carrier Sep 6 00:02:18.130872 env[1216]: time="2025-09-06T00:02:18.130801643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:02:18.131039 env[1216]: time="2025-09-06T00:02:18.130854430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:02:18.131039 env[1216]: time="2025-09-06T00:02:18.131022916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:02:18.131295 env[1216]: time="2025-09-06T00:02:18.131259997Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e403e850c6a5757b09eee74a3d9635c043edbb3a124d38424c6a0ac49f69e0a0 pid=2620 runtime=io.containerd.runc.v2 Sep 6 00:02:18.144927 systemd[1]: Started cri-containerd-e403e850c6a5757b09eee74a3d9635c043edbb3a124d38424c6a0ac49f69e0a0.scope. 
Sep 6 00:02:18.164567 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:02:18.179236 env[1216]: time="2025-09-06T00:02:18.179200984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:91ec9198-2405-424e-8782-050037649e81,Namespace:default,Attempt:0,} returns sandbox id \"e403e850c6a5757b09eee74a3d9635c043edbb3a124d38424c6a0ac49f69e0a0\"" Sep 6 00:02:18.180652 env[1216]: time="2025-09-06T00:02:18.180625671Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 6 00:02:18.391730 kubelet[1422]: E0906 00:02:18.391674 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:19.348295 systemd-networkd[1055]: lxc884aea795188: Gained IPv6LL Sep 6 00:02:19.392749 kubelet[1422]: E0906 00:02:19.392687 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:20.323238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960971258.mount: Deactivated successfully. Sep 6 00:02:20.393455 kubelet[1422]: E0906 00:02:20.393391 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:21.394581 kubelet[1422]: E0906 00:02:21.394500 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:22.209921 env[1216]: time="2025-09-06T00:02:22.209868170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:22.213755 env[1216]: time="2025-09-06T00:02:22.213692219Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:22.221413 env[1216]: time="2025-09-06T00:02:22.220871966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:22.223973 env[1216]: time="2025-09-06T00:02:22.223769435Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 6 00:02:22.224201 env[1216]: time="2025-09-06T00:02:22.222948699Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:22.228255 env[1216]: time="2025-09-06T00:02:22.228210618Z" level=info msg="CreateContainer within sandbox \"e403e850c6a5757b09eee74a3d9635c043edbb3a124d38424c6a0ac49f69e0a0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 6 00:02:22.244155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076319500.mount: Deactivated successfully. 
Sep 6 00:02:22.252243 env[1216]: time="2025-09-06T00:02:22.252190861Z" level=info msg="CreateContainer within sandbox \"e403e850c6a5757b09eee74a3d9635c043edbb3a124d38424c6a0ac49f69e0a0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9cab7bc8f25dbb5bb0293ecfa230f838ef188f7a8af934ca034d356b0673250a\"" Sep 6 00:02:22.253190 env[1216]: time="2025-09-06T00:02:22.253151976Z" level=info msg="StartContainer for \"9cab7bc8f25dbb5bb0293ecfa230f838ef188f7a8af934ca034d356b0673250a\"" Sep 6 00:02:22.278734 systemd[1]: Started cri-containerd-9cab7bc8f25dbb5bb0293ecfa230f838ef188f7a8af934ca034d356b0673250a.scope. Sep 6 00:02:22.307446 env[1216]: time="2025-09-06T00:02:22.307351542Z" level=info msg="StartContainer for \"9cab7bc8f25dbb5bb0293ecfa230f838ef188f7a8af934ca034d356b0673250a\" returns successfully" Sep 6 00:02:22.395130 kubelet[1422]: E0906 00:02:22.395070 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:22.699792 kubelet[1422]: I0906 00:02:22.699655 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.654326716 podStartE2EDuration="5.69963796s" podCreationTimestamp="2025-09-06 00:02:17 +0000 UTC" firstStartedPulling="2025-09-06 00:02:18.180318355 +0000 UTC m=+30.249576846" lastFinishedPulling="2025-09-06 00:02:22.225629599 +0000 UTC m=+34.294888090" observedRunningTime="2025-09-06 00:02:22.696989432 +0000 UTC m=+34.766247923" watchObservedRunningTime="2025-09-06 00:02:22.69963796 +0000 UTC m=+34.768896451" Sep 6 00:02:23.395902 kubelet[1422]: E0906 00:02:23.395857 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:24.397347 kubelet[1422]: E0906 00:02:24.397298 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:25.397966 kubelet[1422]: E0906 00:02:25.397926 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:25.985539 update_engine[1206]: I0906 00:02:25.985124 1206 update_attempter.cc:509] Updating boot flags... 
Sep 6 00:02:26.399467 kubelet[1422]: E0906 00:02:26.399396 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:27.400016 kubelet[1422]: E0906 00:02:27.399961 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:28.400757 kubelet[1422]: E0906 00:02:28.400705 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:29.371935 kubelet[1422]: E0906 00:02:29.371896 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:29.401108 kubelet[1422]: E0906 00:02:29.401053 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:30.401449 kubelet[1422]: E0906 00:02:30.401409 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:31.402943 kubelet[1422]: E0906 00:02:31.402889 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:32.219297 systemd[1]: Created slice kubepods-besteffort-podaf722769_b3a7_40c0_a12f_dc42d3148b32.slice. Sep 6 00:02:32.308797 kubelet[1422]: I0906 00:02:32.308681 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hbx\" (UniqueName: \"kubernetes.io/projected/af722769-b3a7-40c0-a12f-dc42d3148b32-kube-api-access-b7hbx\") pod \"test-pod-1\" (UID: \"af722769-b3a7-40c0-a12f-dc42d3148b32\") " pod="default/test-pod-1" Sep 6 00:02:32.308797 kubelet[1422]: I0906 00:02:32.308730 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-31c97bf0-44e5-44c9-b89b-c319aa4bea52\" (UniqueName: \"kubernetes.io/nfs/af722769-b3a7-40c0-a12f-dc42d3148b32-pvc-31c97bf0-44e5-44c9-b89b-c319aa4bea52\") pod \"test-pod-1\" (UID: \"af722769-b3a7-40c0-a12f-dc42d3148b32\") " pod="default/test-pod-1" Sep 6 00:02:32.403404 kubelet[1422]: E0906 00:02:32.403290 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:32.450871 kernel: FS-Cache: Loaded Sep 6 00:02:32.480085 kernel: RPC: Registered named UNIX socket transport module. Sep 6 00:02:32.480245 kernel: RPC: Registered udp transport module. Sep 6 00:02:32.480280 kernel: RPC: Registered tcp transport module. Sep 6 00:02:32.480302 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Sep 6 00:02:32.533868 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 6 00:02:32.676187 kernel: NFS: Registering the id_resolver key type Sep 6 00:02:32.676346 kernel: Key type id_resolver registered Sep 6 00:02:32.676373 kernel: Key type id_legacy registered Sep 6 00:02:32.716057 nfsidmap[2753]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 6 00:02:32.721806 nfsidmap[2756]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 6 00:02:32.827381 env[1216]: time="2025-09-06T00:02:32.827319905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af722769-b3a7-40c0-a12f-dc42d3148b32,Namespace:default,Attempt:0,}" Sep 6 00:02:32.860310 systemd-networkd[1055]: lxc9748dfbbc99f: Link UP Sep 6 00:02:32.872876 kernel: eth0: renamed from tmp7d124 Sep 6 00:02:32.884888 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:02:32.885012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9748dfbbc99f: link becomes ready Sep 6 00:02:32.885029 systemd-networkd[1055]: lxc9748dfbbc99f: Gained carrier Sep 6 00:02:33.070565 env[1216]: time="2025-09-06T00:02:33.070490347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:02:33.070565 env[1216]: time="2025-09-06T00:02:33.070540519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:02:33.070565 env[1216]: time="2025-09-06T00:02:33.070550602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:02:33.070773 env[1216]: time="2025-09-06T00:02:33.070723203Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d12496479fcc27cf83f572ff2cf6389eefe6bcb1bfa87152bb3d5638ae8dc39 pid=2791 runtime=io.containerd.runc.v2 Sep 6 00:02:33.081161 systemd[1]: Started cri-containerd-7d12496479fcc27cf83f572ff2cf6389eefe6bcb1bfa87152bb3d5638ae8dc39.scope. 
Sep 6 00:02:33.101195 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:02:33.121459 env[1216]: time="2025-09-06T00:02:33.121395943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af722769-b3a7-40c0-a12f-dc42d3148b32,Namespace:default,Attempt:0,} returns sandbox id \"7d12496479fcc27cf83f572ff2cf6389eefe6bcb1bfa87152bb3d5638ae8dc39\"" Sep 6 00:02:33.122507 env[1216]: time="2025-09-06T00:02:33.122477560Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:02:33.404224 kubelet[1422]: E0906 00:02:33.403769 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:33.406092 env[1216]: time="2025-09-06T00:02:33.406042406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:33.408029 env[1216]: time="2025-09-06T00:02:33.407987709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:33.409877 env[1216]: time="2025-09-06T00:02:33.409832428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:33.415447 env[1216]: time="2025-09-06T00:02:33.415402474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:02:33.416395 env[1216]: time="2025-09-06T00:02:33.416325174Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 00:02:33.418893 env[1216]: time="2025-09-06T00:02:33.418843253Z" level=info msg="CreateContainer within sandbox \"7d12496479fcc27cf83f572ff2cf6389eefe6bcb1bfa87152bb3d5638ae8dc39\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 6 00:02:33.437551 env[1216]: time="2025-09-06T00:02:33.437472246Z" level=info msg="CreateContainer within sandbox \"7d12496479fcc27cf83f572ff2cf6389eefe6bcb1bfa87152bb3d5638ae8dc39\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f7b380c809e17f37cd1df9c75277bd69ed2ce31572b83296c8c5a153cbcecd6d\"" Sep 6 00:02:33.438009 env[1216]: time="2025-09-06T00:02:33.437987089Z" level=info msg="StartContainer for \"f7b380c809e17f37cd1df9c75277bd69ed2ce31572b83296c8c5a153cbcecd6d\"" Sep 6 00:02:33.456773 systemd[1]: Started cri-containerd-f7b380c809e17f37cd1df9c75277bd69ed2ce31572b83296c8c5a153cbcecd6d.scope. Sep 6 00:02:33.499007 env[1216]: time="2025-09-06T00:02:33.498954279Z" level=info msg="StartContainer for \"f7b380c809e17f37cd1df9c75277bd69ed2ce31572b83296c8c5a153cbcecd6d\" returns successfully" Sep 6 00:02:34.004132 systemd-networkd[1055]: lxc9748dfbbc99f: Gained IPv6LL Sep 6 00:02:34.404855 kubelet[1422]: E0906 00:02:34.404777 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:34.426422 systemd[1]: run-containerd-runc-k8s.io-f7b380c809e17f37cd1df9c75277bd69ed2ce31572b83296c8c5a153cbcecd6d-runc.Ek8NIL.mount: Deactivated successfully. 
Sep 6 00:02:35.405199 kubelet[1422]: E0906 00:02:35.405139 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:36.405732 kubelet[1422]: E0906 00:02:36.405686 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:37.407113 kubelet[1422]: E0906 00:02:37.407079 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:38.408095 kubelet[1422]: E0906 00:02:38.408057 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:39.408550 kubelet[1422]: E0906 00:02:39.408498 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:40.409119 kubelet[1422]: E0906 00:02:40.409071 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:41.362426 kubelet[1422]: I0906 00:02:41.362351 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=24.067016862 podStartE2EDuration="24.362331984s" podCreationTimestamp="2025-09-06 00:02:17 +0000 UTC" firstStartedPulling="2025-09-06 00:02:33.122198894 +0000 UTC m=+45.191457385" lastFinishedPulling="2025-09-06 00:02:33.417514016 +0000 UTC m=+45.486772507" observedRunningTime="2025-09-06 00:02:33.717658529 +0000 UTC m=+45.786917020" watchObservedRunningTime="2025-09-06 00:02:41.362331984 +0000 UTC m=+53.431590475" Sep 6 00:02:41.401149 env[1216]: time="2025-09-06T00:02:41.400938535Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:02:41.407086 env[1216]: time="2025-09-06T00:02:41.407033655Z" level=info msg="StopContainer for \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\" with timeout 2 (s)" Sep 6 00:02:41.407532 env[1216]: time="2025-09-06T00:02:41.407492853Z" level=info msg="Stop container \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\" with signal terminated" Sep 6 00:02:41.410056 kubelet[1422]: E0906 00:02:41.409963 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:41.421549 systemd-networkd[1055]: lxc_health: Link DOWN Sep 6 00:02:41.421553 systemd-networkd[1055]: lxc_health: Lost carrier Sep 6 00:02:41.460219 systemd[1]: cri-containerd-a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759.scope: Deactivated successfully. Sep 6 00:02:41.460531 systemd[1]: cri-containerd-a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759.scope: Consumed 6.377s CPU time. Sep 6 00:02:41.482267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759-rootfs.mount: Deactivated successfully. 
Sep 6 00:02:41.492634 env[1216]: time="2025-09-06T00:02:41.492574057Z" level=info msg="shim disconnected" id=a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759 Sep 6 00:02:41.492634 env[1216]: time="2025-09-06T00:02:41.492622585Z" level=warning msg="cleaning up after shim disconnected" id=a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759 namespace=k8s.io Sep 6 00:02:41.492634 env[1216]: time="2025-09-06T00:02:41.492633307Z" level=info msg="cleaning up dead shim" Sep 6 00:02:41.500010 env[1216]: time="2025-09-06T00:02:41.499965038Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:02:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2924 runtime=io.containerd.runc.v2\n" Sep 6 00:02:41.502358 env[1216]: time="2025-09-06T00:02:41.502320241Z" level=info msg="StopContainer for \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\" returns successfully" Sep 6 00:02:41.502967 env[1216]: time="2025-09-06T00:02:41.502937746Z" level=info msg="StopPodSandbox for \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\"" Sep 6 00:02:41.503018 env[1216]: time="2025-09-06T00:02:41.502998876Z" level=info msg="Container to stop \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:02:41.503056 env[1216]: time="2025-09-06T00:02:41.503013279Z" level=info msg="Container to stop \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:02:41.503056 env[1216]: time="2025-09-06T00:02:41.503025441Z" level=info msg="Container to stop \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:02:41.503056 env[1216]: time="2025-09-06T00:02:41.503038763Z" level=info msg="Container to stop \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:02:41.503056 env[1216]: time="2025-09-06T00:02:41.503052005Z" level=info msg="Container to stop \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:02:41.504763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b-shm.mount: Deactivated successfully. Sep 6 00:02:41.510936 systemd[1]: cri-containerd-2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b.scope: Deactivated successfully. Sep 6 00:02:41.531581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b-rootfs.mount: Deactivated successfully. 
Sep 6 00:02:41.537495 env[1216]: time="2025-09-06T00:02:41.537431394Z" level=info msg="shim disconnected" id=2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b Sep 6 00:02:41.537495 env[1216]: time="2025-09-06T00:02:41.537483603Z" level=warning msg="cleaning up after shim disconnected" id=2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b namespace=k8s.io Sep 6 00:02:41.537495 env[1216]: time="2025-09-06T00:02:41.537493165Z" level=info msg="cleaning up dead shim" Sep 6 00:02:41.545052 env[1216]: time="2025-09-06T00:02:41.545008847Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:02:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2956 runtime=io.containerd.runc.v2\n" Sep 6 00:02:41.545342 env[1216]: time="2025-09-06T00:02:41.545314340Z" level=info msg="TearDown network for sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" successfully" Sep 6 00:02:41.545376 env[1216]: time="2025-09-06T00:02:41.545341544Z" level=info msg="StopPodSandbox for \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" returns successfully" Sep 6 00:02:41.666936 kubelet[1422]: I0906 00:02:41.665988 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hostproc\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.666936 kubelet[1422]: I0906 00:02:41.666019 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hostproc" (OuterVolumeSpecName: "hostproc") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.666936 kubelet[1422]: I0906 00:02:41.666069 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-xtables-lock\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.666936 kubelet[1422]: I0906 00:02:41.666093 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cni-path\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.666936 kubelet[1422]: I0906 00:02:41.666159 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.666936 kubelet[1422]: I0906 00:02:41.666200 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-config-path\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667462 kubelet[1422]: I0906 00:02:41.666217 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-net\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667462 kubelet[1422]: I0906 00:02:41.666223 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cni-path" (OuterVolumeSpecName: "cni-path") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.667462 kubelet[1422]: I0906 00:02:41.666249 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.667462 kubelet[1422]: I0906 00:02:41.666342 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.667462 kubelet[1422]: I0906 00:02:41.666231 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-cgroup\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667667 kubelet[1422]: I0906 00:02:41.666896 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-etc-cni-netd\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667667 kubelet[1422]: I0906 00:02:41.666914 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-kernel\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667667 kubelet[1422]: I0906 00:02:41.666934 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-run\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667667 kubelet[1422]: I0906 00:02:41.666962 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.667667 kubelet[1422]: I0906 00:02:41.666982 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.667793 kubelet[1422]: I0906 00:02:41.667006 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-lib-modules\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667793 kubelet[1422]: I0906 00:02:41.667024 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k29mq\" (UniqueName: \"kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-kube-api-access-k29mq\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667793 kubelet[1422]: I0906 00:02:41.667041 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hubble-tls\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667793 kubelet[1422]: I0906 00:02:41.667558 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-bpf-maps\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667793 kubelet[1422]: I0906 00:02:41.667581 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb58470b-5f8c-48af-9e30-2fc4aec8545e-clustermesh-secrets\") pod \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\" (UID: \"eb58470b-5f8c-48af-9e30-2fc4aec8545e\") " Sep 6 00:02:41.667793 kubelet[1422]: I0906 00:02:41.667612 1422 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hostproc\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.667793 kubelet[1422]: I0906 00:02:41.667623 1422 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cni-path\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.667996 kubelet[1422]: I0906 00:02:41.667630 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-net\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.667996 kubelet[1422]: I0906 00:02:41.667641 1422 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-xtables-lock\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.667996 kubelet[1422]: I0906 00:02:41.667649 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-cgroup\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.667996 kubelet[1422]: I0906 00:02:41.667658 1422 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-etc-cni-netd\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.667996 kubelet[1422]: I0906 00:02:41.667665 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-host-proc-sys-kernel\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.667996 kubelet[1422]: I0906 00:02:41.667060 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.667996 kubelet[1422]: I0906 00:02:41.667071 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.668174 kubelet[1422]: I0906 00:02:41.667968 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:02:41.668548 kubelet[1422]: I0906 00:02:41.668507 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:02:41.671356 systemd[1]: var-lib-kubelet-pods-eb58470b\x2d5f8c\x2d48af\x2d9e30\x2d2fc4aec8545e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk29mq.mount: Deactivated successfully. Sep 6 00:02:41.672362 kubelet[1422]: I0906 00:02:41.672313 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-kube-api-access-k29mq" (OuterVolumeSpecName: "kube-api-access-k29mq") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "kube-api-access-k29mq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:02:41.672625 kubelet[1422]: I0906 00:02:41.672586 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb58470b-5f8c-48af-9e30-2fc4aec8545e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:02:41.674438 kubelet[1422]: I0906 00:02:41.674397 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eb58470b-5f8c-48af-9e30-2fc4aec8545e" (UID: "eb58470b-5f8c-48af-9e30-2fc4aec8545e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:02:41.724037 kubelet[1422]: I0906 00:02:41.723972 1422 scope.go:117] "RemoveContainer" containerID="a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759" Sep 6 00:02:41.730493 systemd[1]: Removed slice kubepods-burstable-podeb58470b_5f8c_48af_9e30_2fc4aec8545e.slice. Sep 6 00:02:41.730573 systemd[1]: kubepods-burstable-podeb58470b_5f8c_48af_9e30_2fc4aec8545e.slice: Consumed 6.508s CPU time. Sep 6 00:02:41.732333 env[1216]: time="2025-09-06T00:02:41.732298058Z" level=info msg="RemoveContainer for \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\"" Sep 6 00:02:41.738613 env[1216]: time="2025-09-06T00:02:41.738568328Z" level=info msg="RemoveContainer for \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\" returns successfully" Sep 6 00:02:41.740277 kubelet[1422]: I0906 00:02:41.739754 1422 scope.go:117] "RemoveContainer" containerID="64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a" Sep 6 00:02:41.740985 env[1216]: time="2025-09-06T00:02:41.740728857Z" level=info msg="RemoveContainer for \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\"" Sep 6 00:02:41.743802 env[1216]: time="2025-09-06T00:02:41.743760574Z" level=info msg="RemoveContainer for \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\" returns successfully" Sep 6 00:02:41.744026 kubelet[1422]: I0906 00:02:41.743931 1422 scope.go:117] "RemoveContainer" containerID="f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a" Sep 6 00:02:41.745005 env[1216]: time="2025-09-06T00:02:41.744973382Z" level=info msg="RemoveContainer for \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\"" Sep 6 00:02:41.749989 env[1216]: time="2025-09-06T00:02:41.748599200Z" level=info msg="RemoveContainer for \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\" returns successfully" Sep 6 00:02:41.749989 env[1216]: time="2025-09-06T00:02:41.749651140Z" level=info msg="RemoveContainer for \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\"" Sep 6 00:02:41.750195 kubelet[1422]: I0906 00:02:41.748748 1422 scope.go:117] "RemoveContainer" containerID="094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2" Sep 6 00:02:41.752124 env[1216]: time="2025-09-06T00:02:41.752072753Z" level=info msg="RemoveContainer for \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\" returns successfully" Sep 6 00:02:41.752251 kubelet[1422]: I0906 00:02:41.752226 1422 scope.go:117] "RemoveContainer" containerID="ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7" Sep 6 00:02:41.753261 env[1216]: time="2025-09-06T00:02:41.753230351Z" level=info msg="RemoveContainer for \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\"" Sep 6 00:02:41.756806 env[1216]: time="2025-09-06T00:02:41.756760954Z" level=info msg="RemoveContainer for \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\" returns successfully" Sep 6 00:02:41.756944 kubelet[1422]: I0906 00:02:41.756927 1422 scope.go:117] "RemoveContainer" containerID="a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759" Sep 6 00:02:41.757221 env[1216]: time="2025-09-06T00:02:41.757104852Z" level=error msg="ContainerStatus for \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\": not found" Sep 6 00:02:41.757354 kubelet[1422]: E0906 00:02:41.757311 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\": not found" containerID="a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759" Sep 6 00:02:41.757440 kubelet[1422]: I0906 00:02:41.757365 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759"} err="failed to get container status \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\": rpc error: code = NotFound desc = an error occurred when try to find container \"a58fdec4a728d215572055702fdf2c5263234663af62516ca80730c02c12b759\": not found" Sep 6 00:02:41.757465 kubelet[1422]: I0906 00:02:41.757444 1422 scope.go:117] "RemoveContainer" containerID="64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a" Sep 6 00:02:41.757646 env[1216]: time="2025-09-06T00:02:41.757608338Z" level=error msg="ContainerStatus for \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\": not found" Sep 6 00:02:41.757739 kubelet[1422]: E0906 00:02:41.757720 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\": not found" containerID="64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a" Sep 6 00:02:41.757767 kubelet[1422]: I0906 00:02:41.757745 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a"} err="failed to get container status \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\": rpc error: code = NotFound desc = an error occurred when try to find container \"64673d9b6ca499fdd7f636747173589ec2d114ea15d88fe491876f483fc2474a\": not found" Sep 6 00:02:41.757767 kubelet[1422]: I0906 00:02:41.757762 1422 scope.go:117] "RemoveContainer" containerID="f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a" Sep 6 00:02:41.757922 env[1216]: time="2025-09-06T00:02:41.757885666Z" level=error msg="ContainerStatus for \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\": not found" Sep 6 00:02:41.758007 kubelet[1422]: E0906 00:02:41.757990 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\": not found" containerID="f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a" Sep 6 00:02:41.758034 kubelet[1422]: I0906 00:02:41.758011 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a"} err="failed to get container status 
\"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4f25ba42e576f84ece5c5b0c3af965d318f87062692f9357f135a3339c2828a\": not found" Sep 6 00:02:41.758034 kubelet[1422]: I0906 00:02:41.758026 1422 scope.go:117] "RemoveContainer" containerID="094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2" Sep 6 00:02:41.758243 env[1216]: time="2025-09-06T00:02:41.758140469Z" level=error msg="ContainerStatus for \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\": not found" Sep 6 00:02:41.758285 kubelet[1422]: E0906 00:02:41.758249 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\": not found" containerID="094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2" Sep 6 00:02:41.758317 kubelet[1422]: I0906 00:02:41.758275 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2"} err="failed to get container status \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"094440791cd863fe42470cc74bafe74ef7a9ebb9c6c8cbdd67fc8830775561b2\": not found" Sep 6 00:02:41.758346 kubelet[1422]: I0906 00:02:41.758317 1422 scope.go:117] "RemoveContainer" containerID="ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7" Sep 6 00:02:41.758493 env[1216]: time="2025-09-06T00:02:41.758445361Z" level=error msg="ContainerStatus for \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\": not found" Sep 6 00:02:41.758552 kubelet[1422]: E0906 00:02:41.758537 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\": not found" containerID="ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7" Sep 6 00:02:41.758582 kubelet[1422]: I0906 00:02:41.758558 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7"} err="failed to get container status \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea8038b20df26049ded728883943a5e4de653b28b20639d8179f163c86f226a7\": not found" Sep 6 00:02:41.768880 kubelet[1422]: I0906 00:02:41.768845 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-config-path\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.768880 kubelet[1422]: I0906 00:02:41.768873 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-cilium-run\") on node \"10.0.0.47\" DevicePath \"\"" Sep 
6 00:02:41.768880 kubelet[1422]: I0906 00:02:41.768882 1422 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-lib-modules\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.769006 kubelet[1422]: I0906 00:02:41.768892 1422 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k29mq\" (UniqueName: \"kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-kube-api-access-k29mq\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.769006 kubelet[1422]: I0906 00:02:41.768901 1422 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb58470b-5f8c-48af-9e30-2fc4aec8545e-hubble-tls\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.769006 kubelet[1422]: I0906 00:02:41.768909 1422 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb58470b-5f8c-48af-9e30-2fc4aec8545e-bpf-maps\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:41.769006 kubelet[1422]: I0906 00:02:41.768917 1422 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb58470b-5f8c-48af-9e30-2fc4aec8545e-clustermesh-secrets\") on node \"10.0.0.47\" DevicePath \"\"" Sep 6 00:02:42.377627 systemd[1]: var-lib-kubelet-pods-eb58470b\x2d5f8c\x2d48af\x2d9e30\x2d2fc4aec8545e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:02:42.377725 systemd[1]: var-lib-kubelet-pods-eb58470b\x2d5f8c\x2d48af\x2d9e30\x2d2fc4aec8545e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:02:42.410316 kubelet[1422]: E0906 00:02:42.410274 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:43.411721 kubelet[1422]: E0906 00:02:43.411674 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:43.589900 kubelet[1422]: I0906 00:02:43.589867 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb58470b-5f8c-48af-9e30-2fc4aec8545e" path="/var/lib/kubelet/pods/eb58470b-5f8c-48af-9e30-2fc4aec8545e/volumes" Sep 6 00:02:44.412523 kubelet[1422]: E0906 00:02:44.412485 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:44.548288 kubelet[1422]: E0906 00:02:44.548202 1422 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:02:44.597759 kubelet[1422]: I0906 00:02:44.597722 1422 memory_manager.go:355] "RemoveStaleState removing state" podUID="eb58470b-5f8c-48af-9e30-2fc4aec8545e" containerName="cilium-agent" Sep 6 00:02:44.607423 kubelet[1422]: W0906 00:02:44.607396 1422 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.47" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.47' and this object Sep 6 00:02:44.607614 kubelet[1422]: E0906 00:02:44.607593 1422 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"hubble-server-certs\" is forbidden: User \"system:node:10.0.0.47\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.47' and this object" logger="UnhandledError" Sep 6 00:02:44.607829 kubelet[1422]: I0906 00:02:44.607606 1422 status_manager.go:890] "Failed to get status for pod" podUID="e5dbdfaa-8224-416b-82d4-b49a0efe21f3" pod="kube-system/cilium-operator-6c4d7847fc-vswlt" err="pods \"cilium-operator-6c4d7847fc-vswlt\" is forbidden: User \"system:node:10.0.0.47\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.47' and this object" Sep 6 00:02:44.611121 kubelet[1422]: W0906 00:02:44.607651 1422 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.47" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.47' and this object Sep 6 00:02:44.611121 kubelet[1422]: E0906 00:02:44.608002 1422 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.47\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.47' and this object" logger="UnhandledError" Sep 6 00:02:44.611121 kubelet[1422]: W0906 00:02:44.607885 1422 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.47" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.47' and this object Sep 6 00:02:44.611121 kubelet[1422]: E0906 00:02:44.608024 1422 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:10.0.0.47\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.47' and this object" logger="UnhandledError" Sep 6 00:02:44.611121 kubelet[1422]: W0906 00:02:44.609704 1422 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.47" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.47' and this object Sep 6 00:02:44.611318 kubelet[1422]: E0906 00:02:44.609733 1422 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:10.0.0.47\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.47' and this object" logger="UnhandledError" Sep 6 00:02:44.612991 kubelet[1422]: I0906 00:02:44.612863 1422 status_manager.go:890] "Failed to get status for pod" podUID="323bda53-993e-43a7-a8fc-3966ca3eb674" pod="kube-system/cilium-fnltc" err="pods \"cilium-fnltc\" is forbidden: User \"system:node:10.0.0.47\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.47' 
and this object" Sep 6 00:02:44.614504 systemd[1]: Created slice kubepods-besteffort-pode5dbdfaa_8224_416b_82d4_b49a0efe21f3.slice. Sep 6 00:02:44.623299 systemd[1]: Created slice kubepods-burstable-pod323bda53_993e_43a7_a8fc_3966ca3eb674.slice. Sep 6 00:02:44.684983 kubelet[1422]: I0906 00:02:44.684871 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnx7\" (UniqueName: \"kubernetes.io/projected/e5dbdfaa-8224-416b-82d4-b49a0efe21f3-kube-api-access-fnnx7\") pod \"cilium-operator-6c4d7847fc-vswlt\" (UID: \"e5dbdfaa-8224-416b-82d4-b49a0efe21f3\") " pod="kube-system/cilium-operator-6c4d7847fc-vswlt" Sep 6 00:02:44.684983 kubelet[1422]: I0906 00:02:44.684917 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-bpf-maps\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.684983 kubelet[1422]: I0906 00:02:44.684941 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-etc-cni-netd\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.684983 kubelet[1422]: I0906 00:02:44.684956 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-hubble-tls\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.685890 kubelet[1422]: I0906 00:02:44.684971 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5dbdfaa-8224-416b-82d4-b49a0efe21f3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vswlt\" (UID: \"e5dbdfaa-8224-416b-82d4-b49a0efe21f3\") " pod="kube-system/cilium-operator-6c4d7847fc-vswlt" Sep 6 00:02:44.685999 kubelet[1422]: I0906 00:02:44.685984 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-run\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686086 kubelet[1422]: I0906 00:02:44.686071 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-lib-modules\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686153 kubelet[1422]: I0906 00:02:44.686139 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-clustermesh-secrets\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686228 kubelet[1422]: I0906 00:02:44.686215 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-ipsec-secrets\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686323 kubelet[1422]: I0906 00:02:44.686311 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-net\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686403 kubelet[1422]: I0906 00:02:44.686390 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-xtables-lock\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686481 kubelet[1422]: I0906 00:02:44.686468 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-config-path\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686552 kubelet[1422]: I0906 00:02:44.686540 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-kernel\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686630 kubelet[1422]: I0906 00:02:44.686617 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7q8d\" (UniqueName: \"kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-kube-api-access-w7q8d\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686704 kubelet[1422]: I0906 00:02:44.686689 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-hostproc\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686772 kubelet[1422]: I0906 00:02:44.686760 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-cgroup\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.686856 kubelet[1422]: I0906 00:02:44.686831 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cni-path\") pod \"cilium-fnltc\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") " pod="kube-system/cilium-fnltc" Sep 6 00:02:44.774477 kubelet[1422]: E0906 00:02:44.774428 1422 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-w7q8d lib-modules xtables-lock], 
unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-fnltc" podUID="323bda53-993e-43a7-a8fc-3966ca3eb674" Sep 6 00:02:45.413539 kubelet[1422]: E0906 00:02:45.413491 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:02:45.789172 kubelet[1422]: E0906 00:02:45.789051 1422 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 6 00:02:45.789172 kubelet[1422]: E0906 00:02:45.789094 1422 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-fnltc: failed to sync secret cache: timed out waiting for the condition Sep 6 00:02:45.789172 kubelet[1422]: E0906 00:02:45.789167 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-hubble-tls podName:323bda53-993e-43a7-a8fc-3966ca3eb674 nodeName:}" failed. No retries permitted until 2025-09-06 00:02:46.289145341 +0000 UTC m=+58.358403832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-hubble-tls") pod "cilium-fnltc" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:02:45.789494 kubelet[1422]: E0906 00:02:45.789458 1422 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 6 00:02:45.789531 kubelet[1422]: E0906 00:02:45.789524 1422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-clustermesh-secrets podName:323bda53-993e-43a7-a8fc-3966ca3eb674 nodeName:}" failed. No retries permitted until 2025-09-06 00:02:46.289511075 +0000 UTC m=+58.358769566 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-clustermesh-secrets") pod "cilium-fnltc" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674") : failed to sync secret cache: timed out waiting for the condition
Sep 6 00:02:45.794740 kubelet[1422]: I0906 00:02:45.794706 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-etc-cni-netd\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.794872 kubelet[1422]: I0906 00:02:45.794747 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-config-path\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.794872 kubelet[1422]: I0906 00:02:45.794767 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-hostproc\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.794872 kubelet[1422]: I0906 00:02:45.794780 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-run\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.794872 kubelet[1422]: I0906 00:02:45.794794 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-bpf-maps\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.794872 kubelet[1422]: I0906 00:02:45.794811 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cni-path\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.794872 kubelet[1422]: I0906 00:02:45.794825 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-net\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.795086 kubelet[1422]: I0906 00:02:45.794851 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-xtables-lock\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.795086 kubelet[1422]: I0906 00:02:45.794870 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-lib-modules\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.795086 kubelet[1422]: I0906 00:02:45.794884 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-kernel\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.795086 kubelet[1422]: I0906 00:02:45.794903 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7q8d\" (UniqueName: \"kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-kube-api-access-w7q8d\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.795086 kubelet[1422]: I0906 00:02:45.794920 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-ipsec-secrets\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.795086 kubelet[1422]: I0906 00:02:45.794934 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-cgroup\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:45.795210 kubelet[1422]: I0906 00:02:45.795037 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795210 kubelet[1422]: I0906 00:02:45.795058 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795329 kubelet[1422]: I0906 00:02:45.795295 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795405 kubelet[1422]: I0906 00:02:45.795392 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-hostproc" (OuterVolumeSpecName: "hostproc") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795476 kubelet[1422]: I0906 00:02:45.795464 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795549 kubelet[1422]: I0906 00:02:45.795537 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795615 kubelet[1422]: I0906 00:02:45.795603 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cni-path" (OuterVolumeSpecName: "cni-path") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795687 kubelet[1422]: I0906 00:02:45.795673 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795764 kubelet[1422]: I0906 00:02:45.795751 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.795830 kubelet[1422]: I0906 00:02:45.795818 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:02:45.799787 kubelet[1422]: I0906 00:02:45.796762 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:02:45.799787 kubelet[1422]: I0906 00:02:45.798559 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 00:02:45.799594 systemd[1]: var-lib-kubelet-pods-323bda53\x2d993e\x2d43a7\x2da8fc\x2d3966ca3eb674-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 6 00:02:45.799678 systemd[1]: var-lib-kubelet-pods-323bda53\x2d993e\x2d43a7\x2da8fc\x2d3966ca3eb674-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7q8d.mount: Deactivated successfully.
Sep 6 00:02:45.800307 kubelet[1422]: I0906 00:02:45.800267 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-kube-api-access-w7q8d" (OuterVolumeSpecName: "kube-api-access-w7q8d") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "kube-api-access-w7q8d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:02:45.820983 kubelet[1422]: E0906 00:02:45.820959 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:45.822101 env[1216]: time="2025-09-06T00:02:45.821703681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vswlt,Uid:e5dbdfaa-8224-416b-82d4-b49a0efe21f3,Namespace:kube-system,Attempt:0,}"
Sep 6 00:02:45.837064 env[1216]: time="2025-09-06T00:02:45.836963900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:02:45.837064 env[1216]: time="2025-09-06T00:02:45.837012187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:02:45.837064 env[1216]: time="2025-09-06T00:02:45.837023669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:02:45.837451 env[1216]: time="2025-09-06T00:02:45.837298670Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b7c1a6f4311a3dde86d74d920861b9633a2e61af3ae0a3af7ccf71e2c71904e pid=2985 runtime=io.containerd.runc.v2
Sep 6 00:02:45.856265 systemd[1]: Started cri-containerd-7b7c1a6f4311a3dde86d74d920861b9633a2e61af3ae0a3af7ccf71e2c71904e.scope.
Sep 6 00:02:45.895081 env[1216]: time="2025-09-06T00:02:45.894596472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vswlt,Uid:e5dbdfaa-8224-416b-82d4-b49a0efe21f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b7c1a6f4311a3dde86d74d920861b9633a2e61af3ae0a3af7ccf71e2c71904e\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895381 1422 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cni-path\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895402 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-run\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895411 1422 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-bpf-maps\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895420 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-net\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895430 1422 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-xtables-lock\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895438 1422 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w7q8d\" (UniqueName: \"kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-kube-api-access-w7q8d\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895446 1422 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-lib-modules\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895513 kubelet[1422]: I0906 00:02:45.895454 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-host-proc-sys-kernel\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895764 kubelet[1422]: I0906 00:02:45.895462 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-ipsec-secrets\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895764 kubelet[1422]: I0906 00:02:45.895469 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-cgroup\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895764 kubelet[1422]: I0906 00:02:45.895477 1422 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-hostproc\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895764 kubelet[1422]: I0906 00:02:45.895485 1422 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/323bda53-993e-43a7-a8fc-3966ca3eb674-etc-cni-netd\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895764 kubelet[1422]: I0906 00:02:45.895493 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/323bda53-993e-43a7-a8fc-3966ca3eb674-cilium-config-path\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:45.895764 kubelet[1422]: E0906 00:02:45.895727 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:45.897423 env[1216]: time="2025-09-06T00:02:45.897391406Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 00:02:46.398158 kubelet[1422]: I0906 00:02:46.398042 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-clustermesh-secrets\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:46.398158 kubelet[1422]: I0906 00:02:46.398098 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-hubble-tls\") pod \"323bda53-993e-43a7-a8fc-3966ca3eb674\" (UID: \"323bda53-993e-43a7-a8fc-3966ca3eb674\") "
Sep 6 00:02:46.401075 kubelet[1422]: I0906 00:02:46.401030 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:02:46.401275 kubelet[1422]: I0906 00:02:46.401251 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "323bda53-993e-43a7-a8fc-3966ca3eb674" (UID: "323bda53-993e-43a7-a8fc-3966ca3eb674"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 00:02:46.414407 kubelet[1422]: E0906 00:02:46.414371 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:46.498824 kubelet[1422]: I0906 00:02:46.498788 1422 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/323bda53-993e-43a7-a8fc-3966ca3eb674-clustermesh-secrets\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:46.499037 kubelet[1422]: I0906 00:02:46.499025 1422 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/323bda53-993e-43a7-a8fc-3966ca3eb674-hubble-tls\") on node \"10.0.0.47\" DevicePath \"\""
Sep 6 00:02:46.740820 systemd[1]: Removed slice kubepods-burstable-pod323bda53_993e_43a7_a8fc_3966ca3eb674.slice.
Sep 6 00:02:46.783666 systemd[1]: Created slice kubepods-burstable-poda1ba6048_e9c7_4582_afc3_4787fccd6855.slice.
Sep 6 00:02:46.799717 systemd[1]: run-containerd-runc-k8s.io-7b7c1a6f4311a3dde86d74d920861b9633a2e61af3ae0a3af7ccf71e2c71904e-runc.BvnER9.mount: Deactivated successfully.
Sep 6 00:02:46.902775 kubelet[1422]: I0906 00:02:46.902726 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-lib-modules\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.902775 kubelet[1422]: I0906 00:02:46.902773 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1ba6048-e9c7-4582-afc3-4787fccd6855-cilium-config-path\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.902973 kubelet[1422]: I0906 00:02:46.902793 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-host-proc-sys-kernel\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.902973 kubelet[1422]: I0906 00:02:46.902809 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-hostproc\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.902973 kubelet[1422]: I0906 00:02:46.902826 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-etc-cni-netd\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.902973 kubelet[1422]: I0906 00:02:46.902856 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1ba6048-e9c7-4582-afc3-4787fccd6855-clustermesh-secrets\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.902973 kubelet[1422]: I0906 00:02:46.902872 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1ba6048-e9c7-4582-afc3-4787fccd6855-cilium-ipsec-secrets\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.902973 kubelet[1422]: I0906 00:02:46.902897 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-cilium-run\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.903115 kubelet[1422]: I0906 00:02:46.902916 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-cilium-cgroup\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.903115 kubelet[1422]: I0906 00:02:46.902941 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-cni-path\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.903115 kubelet[1422]: I0906 00:02:46.902956 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1ba6048-e9c7-4582-afc3-4787fccd6855-hubble-tls\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.903115 kubelet[1422]: I0906 00:02:46.902972 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjhc\" (UniqueName: \"kubernetes.io/projected/a1ba6048-e9c7-4582-afc3-4787fccd6855-kube-api-access-5zjhc\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.903115 kubelet[1422]: I0906 00:02:46.902989 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-bpf-maps\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.903115 kubelet[1422]: I0906 00:02:46.903003 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-xtables-lock\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:46.903241 kubelet[1422]: I0906 00:02:46.903019 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1ba6048-e9c7-4582-afc3-4787fccd6855-host-proc-sys-net\") pod \"cilium-hgpnv\" (UID: \"a1ba6048-e9c7-4582-afc3-4787fccd6855\") " pod="kube-system/cilium-hgpnv"
Sep 6 00:02:47.099980 kubelet[1422]: E0906 00:02:47.099946 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:47.101178 env[1216]: time="2025-09-06T00:02:47.100764631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hgpnv,Uid:a1ba6048-e9c7-4582-afc3-4787fccd6855,Namespace:kube-system,Attempt:0,}"
Sep 6 00:02:47.125280 env[1216]: time="2025-09-06T00:02:47.125200141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:02:47.125280 env[1216]: time="2025-09-06T00:02:47.125245948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:02:47.125718 env[1216]: time="2025-09-06T00:02:47.125255509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:02:47.125718 env[1216]: time="2025-09-06T00:02:47.125386647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e pid=3032 runtime=io.containerd.runc.v2
Sep 6 00:02:47.145823 systemd[1]: Started cri-containerd-c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e.scope.
Sep 6 00:02:47.189448 env[1216]: time="2025-09-06T00:02:47.189402408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hgpnv,Uid:a1ba6048-e9c7-4582-afc3-4787fccd6855,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\""
Sep 6 00:02:47.190399 kubelet[1422]: E0906 00:02:47.190377 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:47.192593 env[1216]: time="2025-09-06T00:02:47.192549925Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:02:47.203495 env[1216]: time="2025-09-06T00:02:47.203453237Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26f07a6ae1fc064fbb651ab7f33fbc34f594ffde37b7218069d26085ad0a164e\""
Sep 6 00:02:47.204712 env[1216]: time="2025-09-06T00:02:47.204687728Z" level=info msg="StartContainer for \"26f07a6ae1fc064fbb651ab7f33fbc34f594ffde37b7218069d26085ad0a164e\""
Sep 6 00:02:47.225169 systemd[1]: Started cri-containerd-26f07a6ae1fc064fbb651ab7f33fbc34f594ffde37b7218069d26085ad0a164e.scope.
Sep 6 00:02:47.264518 systemd[1]: cri-containerd-26f07a6ae1fc064fbb651ab7f33fbc34f594ffde37b7218069d26085ad0a164e.scope: Deactivated successfully.
Sep 6 00:02:47.266495 env[1216]: time="2025-09-06T00:02:47.266446496Z" level=info msg="StartContainer for \"26f07a6ae1fc064fbb651ab7f33fbc34f594ffde37b7218069d26085ad0a164e\" returns successfully"
Sep 6 00:02:47.297730 env[1216]: time="2025-09-06T00:02:47.297673268Z" level=info msg="shim disconnected" id=26f07a6ae1fc064fbb651ab7f33fbc34f594ffde37b7218069d26085ad0a164e
Sep 6 00:02:47.297730 env[1216]: time="2025-09-06T00:02:47.297717834Z" level=warning msg="cleaning up after shim disconnected" id=26f07a6ae1fc064fbb651ab7f33fbc34f594ffde37b7218069d26085ad0a164e namespace=k8s.io
Sep 6 00:02:47.297730 env[1216]: time="2025-09-06T00:02:47.297727236Z" level=info msg="cleaning up dead shim"
Sep 6 00:02:47.304532 env[1216]: time="2025-09-06T00:02:47.304489054Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:02:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3116 runtime=io.containerd.runc.v2\n"
Sep 6 00:02:47.415449 kubelet[1422]: E0906 00:02:47.415318 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:47.590777 kubelet[1422]: I0906 00:02:47.590586 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323bda53-993e-43a7-a8fc-3966ca3eb674" path="/var/lib/kubelet/pods/323bda53-993e-43a7-a8fc-3966ca3eb674/volumes"
Sep 6 00:02:47.687374 env[1216]: time="2025-09-06T00:02:47.687265996Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:02:47.688782 env[1216]: time="2025-09-06T00:02:47.688750922Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:02:47.690028 env[1216]: time="2025-09-06T00:02:47.689986693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:02:47.690633 env[1216]: time="2025-09-06T00:02:47.690602219Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 6 00:02:47.692743 env[1216]: time="2025-09-06T00:02:47.692695229Z" level=info msg="CreateContainer within sandbox \"7b7c1a6f4311a3dde86d74d920861b9633a2e61af3ae0a3af7ccf71e2c71904e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 00:02:47.701860 env[1216]: time="2025-09-06T00:02:47.701800572Z" level=info msg="CreateContainer within sandbox \"7b7c1a6f4311a3dde86d74d920861b9633a2e61af3ae0a3af7ccf71e2c71904e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b58a284095e38526b4d77f4ec8f179a0c5a4aaf4e35bbdbb9fc809940c220065\""
Sep 6 00:02:47.702410 env[1216]: time="2025-09-06T00:02:47.702384413Z" level=info msg="StartContainer for \"b58a284095e38526b4d77f4ec8f179a0c5a4aaf4e35bbdbb9fc809940c220065\""
Sep 6 00:02:47.716177 systemd[1]: Started cri-containerd-b58a284095e38526b4d77f4ec8f179a0c5a4aaf4e35bbdbb9fc809940c220065.scope.
Sep 6 00:02:47.740649 kubelet[1422]: E0906 00:02:47.740618 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:47.744630 env[1216]: time="2025-09-06T00:02:47.743727669Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:02:47.747565 env[1216]: time="2025-09-06T00:02:47.747515474Z" level=info msg="StartContainer for \"b58a284095e38526b4d77f4ec8f179a0c5a4aaf4e35bbdbb9fc809940c220065\" returns successfully"
Sep 6 00:02:47.757198 env[1216]: time="2025-09-06T00:02:47.757134209Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581\""
Sep 6 00:02:47.757785 env[1216]: time="2025-09-06T00:02:47.757744133Z" level=info msg="StartContainer for \"e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581\""
Sep 6 00:02:47.775933 systemd[1]: Started cri-containerd-e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581.scope.
Sep 6 00:02:47.871930 env[1216]: time="2025-09-06T00:02:47.871870846Z" level=info msg="StartContainer for \"e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581\" returns successfully"
Sep 6 00:02:47.880586 systemd[1]: cri-containerd-e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581.scope: Deactivated successfully.
Sep 6 00:02:47.896688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581-rootfs.mount: Deactivated successfully.
Sep 6 00:02:47.911896 env[1216]: time="2025-09-06T00:02:47.906891824Z" level=info msg="shim disconnected" id=e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581
Sep 6 00:02:47.911896 env[1216]: time="2025-09-06T00:02:47.906940391Z" level=warning msg="cleaning up after shim disconnected" id=e1bc9f2fb26bc827e2ec03631c483040e581d04f48a4ec729c8ddb870c69c581 namespace=k8s.io
Sep 6 00:02:47.911896 env[1216]: time="2025-09-06T00:02:47.906956433Z" level=info msg="cleaning up dead shim"
Sep 6 00:02:47.919957 env[1216]: time="2025-09-06T00:02:47.919912031Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:02:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3215 runtime=io.containerd.runc.v2\n"
Sep 6 00:02:48.415900 kubelet[1422]: E0906 00:02:48.415852 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:48.743931 kubelet[1422]: E0906 00:02:48.743704 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:48.744781 kubelet[1422]: E0906 00:02:48.744723 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:48.746535 env[1216]: time="2025-09-06T00:02:48.746494283Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:02:48.773106 env[1216]: time="2025-09-06T00:02:48.772749855Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164\""
Sep 6 00:02:48.773452 env[1216]: time="2025-09-06T00:02:48.773422505Z" level=info msg="StartContainer for \"233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164\""
Sep 6 00:02:48.806474 systemd[1]: Started cri-containerd-233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164.scope.
Sep 6 00:02:48.807675 kubelet[1422]: I0906 00:02:48.807392 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vswlt" podStartSLOduration=3.013112111 podStartE2EDuration="4.807372632s" podCreationTimestamp="2025-09-06 00:02:44 +0000 UTC" firstStartedPulling="2025-09-06 00:02:45.897142209 +0000 UTC m=+57.966400700" lastFinishedPulling="2025-09-06 00:02:47.69140273 +0000 UTC m=+59.760661221" observedRunningTime="2025-09-06 00:02:48.80728134 +0000 UTC m=+60.876539831" watchObservedRunningTime="2025-09-06 00:02:48.807372632 +0000 UTC m=+60.876631123"
Sep 6 00:02:48.845894 systemd[1]: cri-containerd-233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164.scope: Deactivated successfully.
Sep 6 00:02:48.851401 env[1216]: time="2025-09-06T00:02:48.851357868Z" level=info msg="StartContainer for \"233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164\" returns successfully"
Sep 6 00:02:48.868149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164-rootfs.mount: Deactivated successfully.
Sep 6 00:02:48.874147 env[1216]: time="2025-09-06T00:02:48.874099287Z" level=info msg="shim disconnected" id=233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164
Sep 6 00:02:48.874147 env[1216]: time="2025-09-06T00:02:48.874148413Z" level=warning msg="cleaning up after shim disconnected" id=233ed678eccc2422045904c6a484d568f7ec4a8eab5c4cab85594fe356bac164 namespace=k8s.io
Sep 6 00:02:48.874380 env[1216]: time="2025-09-06T00:02:48.874170496Z" level=info msg="cleaning up dead shim"
Sep 6 00:02:48.882089 env[1216]: time="2025-09-06T00:02:48.882010951Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:02:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3275 runtime=io.containerd.runc.v2\n"
Sep 6 00:02:49.372356 kubelet[1422]: E0906 00:02:49.372296 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:49.399828 env[1216]: time="2025-09-06T00:02:49.399789297Z" level=info msg="StopPodSandbox for \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\""
Sep 6 00:02:49.399936 env[1216]: time="2025-09-06T00:02:49.399893471Z" level=info msg="TearDown network for sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" successfully"
Sep 6 00:02:49.399936 env[1216]: time="2025-09-06T00:02:49.399927875Z" level=info msg="StopPodSandbox for \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" returns successfully"
Sep 6 00:02:49.400383 env[1216]: time="2025-09-06T00:02:49.400358891Z" level=info msg="RemovePodSandbox for \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\""
Sep 6 00:02:49.400448 env[1216]: time="2025-09-06T00:02:49.400387415Z" level=info msg="Forcibly stopping sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\""
Sep 6 00:02:49.400478 env[1216]: time="2025-09-06T00:02:49.400447223Z" level=info msg="TearDown network for sandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" successfully"
Sep 6 00:02:49.404977 env[1216]: time="2025-09-06T00:02:49.404941290Z" level=info msg="RemovePodSandbox \"2fee179374c7835c4bb71080dd1a8af77c762fc09f9eeb550902e11385f51a1b\" returns successfully"
Sep 6 00:02:49.416533 kubelet[1422]: E0906 00:02:49.416485 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:49.549019 kubelet[1422]: E0906 00:02:49.548986 1422 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:02:49.750202 kubelet[1422]: E0906 00:02:49.750114 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:49.750460 kubelet[1422]: E0906 00:02:49.750207 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:49.753166 env[1216]: time="2025-09-06T00:02:49.753123342Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:02:49.774526 env[1216]: time="2025-09-06T00:02:49.774480810Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b\""
Sep 6 00:02:49.775910 env[1216]: time="2025-09-06T00:02:49.775878352Z" level=info msg="StartContainer for \"74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b\""
Sep 6 00:02:49.799679 systemd[1]: Started cri-containerd-74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b.scope.
Sep 6 00:02:49.840477 systemd[1]: cri-containerd-74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b.scope: Deactivated successfully.
Sep 6 00:02:49.842309 env[1216]: time="2025-09-06T00:02:49.842172567Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ba6048_e9c7_4582_afc3_4787fccd6855.slice/cri-containerd-74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b.scope/memory.events\": no such file or directory"
Sep 6 00:02:49.843349 env[1216]: time="2025-09-06T00:02:49.843306995Z" level=info msg="StartContainer for \"74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b\" returns successfully"
Sep 6 00:02:49.867053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b-rootfs.mount: Deactivated successfully.
Sep 6 00:02:49.878420 env[1216]: time="2025-09-06T00:02:49.878337127Z" level=info msg="shim disconnected" id=74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b
Sep 6 00:02:49.878420 env[1216]: time="2025-09-06T00:02:49.878407297Z" level=warning msg="cleaning up after shim disconnected" id=74dc471d3041272a54c615f3f5f0f8d4605d7215c99d05b2e8830988eeec7b1b namespace=k8s.io
Sep 6 00:02:49.878420 env[1216]: time="2025-09-06T00:02:49.878418658Z" level=info msg="cleaning up dead shim"
Sep 6 00:02:49.884776 env[1216]: time="2025-09-06T00:02:49.884732202Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:02:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3330 runtime=io.containerd.runc.v2\n"
Sep 6 00:02:50.417848 kubelet[1422]: E0906 00:02:50.417778 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:50.755971 kubelet[1422]: E0906 00:02:50.755344 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:50.758601 env[1216]: time="2025-09-06T00:02:50.758463370Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:02:50.783374 env[1216]: time="2025-09-06T00:02:50.782250067Z" level=info msg="CreateContainer within sandbox \"c9cd76a92c1407ed8a33f092891586ad0c798156f892f9edb1f35a99cc97556e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c4526759bff280157c0cdd8ab3ffb5c34aba3854d35bbae9fba03008d0f4932\""
Sep 6 00:02:50.783374 env[1216]: time="2025-09-06T00:02:50.782889108Z" level=info msg="StartContainer for \"4c4526759bff280157c0cdd8ab3ffb5c34aba3854d35bbae9fba03008d0f4932\""
Sep 6 00:02:50.815491 systemd[1]: Started cri-containerd-4c4526759bff280157c0cdd8ab3ffb5c34aba3854d35bbae9fba03008d0f4932.scope.
Sep 6 00:02:50.854272 env[1216]: time="2025-09-06T00:02:50.854210434Z" level=info msg="StartContainer for \"4c4526759bff280157c0cdd8ab3ffb5c34aba3854d35bbae9fba03008d0f4932\" returns successfully"
Sep 6 00:02:50.871548 systemd[1]: run-containerd-runc-k8s.io-4c4526759bff280157c0cdd8ab3ffb5c34aba3854d35bbae9fba03008d0f4932-runc.yxqryd.mount: Deactivated successfully.
Sep 6 00:02:51.035729 kubelet[1422]: I0906 00:02:51.035610 1422 setters.go:602] "Node became not ready" node="10.0.0.47" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:02:51Z","lastTransitionTime":"2025-09-06T00:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 6 00:02:51.109860 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 6 00:02:51.418411 kubelet[1422]: E0906 00:02:51.418355 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:51.765740 kubelet[1422]: E0906 00:02:51.765252 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:52.419131 kubelet[1422]: E0906 00:02:52.419046 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:53.078227 systemd[1]: run-containerd-runc-k8s.io-4c4526759bff280157c0cdd8ab3ffb5c34aba3854d35bbae9fba03008d0f4932-runc.hOkcBj.mount: Deactivated successfully.
Sep 6 00:02:53.100773 kubelet[1422]: E0906 00:02:53.100682 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:53.422477 kubelet[1422]: E0906 00:02:53.422343 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:54.057124 systemd-networkd[1055]: lxc_health: Link UP
Sep 6 00:02:54.081875 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:02:54.082156 systemd-networkd[1055]: lxc_health: Gained carrier
Sep 6 00:02:54.422830 kubelet[1422]: E0906 00:02:54.422766 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:55.101571 kubelet[1422]: E0906 00:02:55.101480 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:55.134658 kubelet[1422]: I0906 00:02:55.134575 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hgpnv" podStartSLOduration=9.134550218 podStartE2EDuration="9.134550218s" podCreationTimestamp="2025-09-06 00:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:02:51.799121655 +0000 UTC m=+63.868380146" watchObservedRunningTime="2025-09-06 00:02:55.134550218 +0000 UTC m=+67.203808709"
Sep 6 00:02:55.280807 systemd[1]: run-containerd-runc-k8s.io-4c4526759bff280157c0cdd8ab3ffb5c34aba3854d35bbae9fba03008d0f4932-runc.SvZsKs.mount: Deactivated successfully.
Sep 6 00:02:55.423274 kubelet[1422]: E0906 00:02:55.423156 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:55.635986 systemd-networkd[1055]: lxc_health: Gained IPv6LL
Sep 6 00:02:55.773027 kubelet[1422]: E0906 00:02:55.772924 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:56.423804 kubelet[1422]: E0906 00:02:56.423740 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:56.775629 kubelet[1422]: E0906 00:02:56.775322 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:02:57.424203 kubelet[1422]: E0906 00:02:57.423962 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:58.425083 kubelet[1422]: E0906 00:02:58.425037 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:02:59.426229 kubelet[1422]: E0906 00:02:59.426161 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:03:00.426782 kubelet[1422]: E0906 00:03:00.426734 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"