Sep 13 00:07:21.682769 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 13 00:07:21.682788 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025 Sep 13 00:07:21.682796 kernel: efi: EFI v2.70 by EDK II Sep 13 00:07:21.682802 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Sep 13 00:07:21.682807 kernel: random: crng init done Sep 13 00:07:21.682812 kernel: ACPI: Early table checksum verification disabled Sep 13 00:07:21.682818 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Sep 13 00:07:21.682825 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:07:21.682831 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682837 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682854 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682861 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682866 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682872 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682882 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682888 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682894 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:07:21.682900 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 13 00:07:21.682905 kernel: NUMA: Failed to initialise from firmware Sep 13 00:07:21.682911 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:07:21.682917 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Sep 13 00:07:21.682923 kernel: Zone ranges: Sep 13 00:07:21.682929 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:07:21.682936 kernel: DMA32 empty Sep 13 00:07:21.682942 kernel: Normal empty Sep 13 00:07:21.682948 kernel: Movable zone start for each node Sep 13 00:07:21.682954 kernel: Early memory node ranges Sep 13 00:07:21.682962 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Sep 13 00:07:21.682968 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Sep 13 00:07:21.682974 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Sep 13 00:07:21.682980 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Sep 13 00:07:21.682986 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Sep 13 00:07:21.682992 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Sep 13 00:07:21.682998 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Sep 13 00:07:21.683005 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:07:21.683014 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 13 00:07:21.683020 kernel: psci: probing for conduit method from ACPI. Sep 13 00:07:21.683025 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 13 00:07:21.683031 kernel: psci: Using standard PSCI v0.2 function IDs Sep 13 00:07:21.683037 kernel: psci: Trusted OS migration not required Sep 13 00:07:21.683045 kernel: psci: SMC Calling Convention v1.1 Sep 13 00:07:21.683051 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 13 00:07:21.683058 kernel: ACPI: SRAT not present Sep 13 00:07:21.683065 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Sep 13 00:07:21.683071 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Sep 13 00:07:21.683077 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 13 00:07:21.683083 kernel: Detected PIPT I-cache on CPU0 Sep 13 00:07:21.683090 kernel: CPU features: detected: GIC system register CPU interface Sep 13 00:07:21.683096 kernel: CPU features: detected: Hardware dirty bit management Sep 13 00:07:21.683102 kernel: CPU features: detected: Spectre-v4 Sep 13 00:07:21.683108 kernel: CPU features: detected: Spectre-BHB Sep 13 00:07:21.683115 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 13 00:07:21.683121 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 13 00:07:21.683128 kernel: CPU features: detected: ARM erratum 1418040 Sep 13 00:07:21.683134 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 13 00:07:21.683140 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 13 00:07:21.683146 kernel: Policy zone: DMA Sep 13 00:07:21.683153 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca Sep 13 00:07:21.683160 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:07:21.683166 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:07:21.683173 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:07:21.683179 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:07:21.683186 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Sep 13 00:07:21.683192 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:07:21.683198 kernel: trace event string verifier disabled Sep 13 00:07:21.683204 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:07:21.683211 kernel: rcu: RCU event tracing is enabled. Sep 13 00:07:21.683217 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:07:21.683223 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:07:21.683230 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:07:21.683236 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:07:21.683242 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:07:21.683248 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 13 00:07:21.683256 kernel: GICv3: 256 SPIs implemented Sep 13 00:07:21.683262 kernel: GICv3: 0 Extended SPIs implemented Sep 13 00:07:21.683269 kernel: GICv3: Distributor has no Range Selector support Sep 13 00:07:21.683275 kernel: Root IRQ handler: gic_handle_irq Sep 13 00:07:21.683281 kernel: GICv3: 16 PPIs implemented Sep 13 00:07:21.683287 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 13 00:07:21.683293 kernel: ACPI: SRAT not present Sep 13 00:07:21.683299 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 13 00:07:21.683306 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Sep 13 00:07:21.683312 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Sep 13 00:07:21.683319 kernel: GICv3: using LPI property table @0x00000000400d0000 Sep 13 00:07:21.683330 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Sep 13 00:07:21.683339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:07:21.683345 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 13 00:07:21.683352 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 13 00:07:21.683359 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 13 00:07:21.683365 kernel: arm-pv: using stolen time PV Sep 13 00:07:21.683372 kernel: Console: colour dummy device 80x25 Sep 13 00:07:21.683379 kernel: ACPI: Core revision 20210730 Sep 13 00:07:21.683386 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 13 00:07:21.683392 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:07:21.683399 kernel: LSM: Security Framework initializing Sep 13 00:07:21.683407 kernel: SELinux: Initializing. Sep 13 00:07:21.683413 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:07:21.683420 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:07:21.683426 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:07:21.683433 kernel: Platform MSI: ITS@0x8080000 domain created Sep 13 00:07:21.683439 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 13 00:07:21.683446 kernel: Remapping and enabling EFI services. Sep 13 00:07:21.683461 kernel: smp: Bringing up secondary CPUs ... 
Sep 13 00:07:21.683468 kernel: Detected PIPT I-cache on CPU1 Sep 13 00:07:21.683476 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 13 00:07:21.683482 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Sep 13 00:07:21.683488 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:07:21.683494 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 13 00:07:21.683501 kernel: Detected PIPT I-cache on CPU2 Sep 13 00:07:21.683507 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 13 00:07:21.683515 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Sep 13 00:07:21.683521 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:07:21.683528 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 13 00:07:21.683534 kernel: Detected PIPT I-cache on CPU3 Sep 13 00:07:21.683541 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 13 00:07:21.683548 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Sep 13 00:07:21.683554 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:07:21.683560 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 13 00:07:21.683572 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:07:21.683580 kernel: SMP: Total of 4 processors activated. Sep 13 00:07:21.683586 kernel: CPU features: detected: 32-bit EL0 Support Sep 13 00:07:21.683593 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 13 00:07:21.683599 kernel: CPU features: detected: Common not Private translations Sep 13 00:07:21.683606 kernel: CPU features: detected: CRC32 instructions Sep 13 00:07:21.683612 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 13 00:07:21.683619 kernel: CPU features: detected: LSE atomic instructions Sep 13 00:07:21.683627 kernel: CPU features: detected: Privileged Access Never Sep 13 00:07:21.683634 kernel: CPU features: detected: RAS Extension Support Sep 13 00:07:21.683640 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 13 00:07:21.683647 kernel: CPU: All CPU(s) started at EL1 Sep 13 00:07:21.683653 kernel: alternatives: patching kernel code Sep 13 00:07:21.683667 kernel: devtmpfs: initialized Sep 13 00:07:21.683674 kernel: KASLR enabled Sep 13 00:07:21.683680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:07:21.683687 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:07:21.683694 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:07:21.683700 kernel: SMBIOS 3.0.0 present. 
Sep 13 00:07:21.683706 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Sep 13 00:07:21.683713 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:07:21.683720 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 13 00:07:21.683728 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 13 00:07:21.683735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 13 00:07:21.683741 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:07:21.683748 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1 Sep 13 00:07:21.683754 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:07:21.683761 kernel: cpuidle: using governor menu Sep 13 00:07:21.683767 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 13 00:07:21.683774 kernel: ASID allocator initialised with 32768 entries Sep 13 00:07:21.683780 kernel: ACPI: bus type PCI registered Sep 13 00:07:21.683788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:07:21.683794 kernel: Serial: AMBA PL011 UART driver Sep 13 00:07:21.683801 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:07:21.683808 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 13 00:07:21.683814 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:07:21.683821 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 13 00:07:21.683828 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:07:21.683835 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 13 00:07:21.683846 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:07:21.683855 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:07:21.683862 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:07:21.683869 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:07:21.683876 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:07:21.683882 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:07:21.683889 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:07:21.683896 kernel: ACPI: Interpreter enabled Sep 13 00:07:21.683903 kernel: ACPI: Using GIC for interrupt routing Sep 13 00:07:21.683909 kernel: ACPI: MCFG table detected, 1 entries Sep 13 00:07:21.683918 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 13 00:07:21.683925 kernel: printk: console [ttyAMA0] enabled Sep 13 00:07:21.683931 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:07:21.684063 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:07:21.684131 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 13 00:07:21.684189 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 13 00:07:21.684249 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 13 00:07:21.684314 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 13 00:07:21.684323 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 13 00:07:21.684330 kernel: PCI host bridge to bus 0000:00 Sep 13 00:07:21.684404 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 13 00:07:21.684513 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 13 
00:07:21.684571 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 13 00:07:21.684624 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:07:21.684717 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 13 00:07:21.684792 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:07:21.684876 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 13 00:07:21.684939 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 13 00:07:21.685000 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:07:21.685060 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:07:21.685118 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 13 00:07:21.685181 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 13 00:07:21.685235 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 13 00:07:21.685287 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 13 00:07:21.685340 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 13 00:07:21.685353 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 13 00:07:21.685360 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 13 00:07:21.685368 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 13 00:07:21.685376 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 13 00:07:21.685383 kernel: iommu: Default domain type: Translated Sep 13 00:07:21.685390 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 13 00:07:21.685396 kernel: vgaarb: loaded Sep 13 00:07:21.685403 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:07:21.685410 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:07:21.685416 kernel: PTP clock support registered Sep 13 00:07:21.685423 kernel: Registered efivars operations Sep 13 00:07:21.685430 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 13 00:07:21.685436 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:07:21.685445 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:07:21.685451 kernel: pnp: PnP ACPI init Sep 13 00:07:21.685517 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 13 00:07:21.685527 kernel: pnp: PnP ACPI: found 1 devices Sep 13 00:07:21.685534 kernel: NET: Registered PF_INET protocol family Sep 13 00:07:21.685540 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:07:21.685547 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:07:21.685554 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:07:21.685562 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:07:21.685569 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 13 00:07:21.685575 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:07:21.685582 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:07:21.685589 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:07:21.685595 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:07:21.685602 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:07:21.685608 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 13 00:07:21.685615 kernel: kvm [1]: HYP mode not available Sep 13 00:07:21.685623 kernel: Initialise system trusted keyrings Sep 13 00:07:21.685629 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:07:21.685636 kernel: Key type asymmetric registered Sep 13 00:07:21.685642 kernel: Asymmetric key parser 'x509' registered Sep 13 00:07:21.685649 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 13 00:07:21.685662 kernel: io scheduler mq-deadline registered Sep 13 00:07:21.685670 kernel: io scheduler kyber registered Sep 13 00:07:21.685677 kernel: io scheduler bfq registered Sep 13 00:07:21.685684 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 13 00:07:21.685692 kernel: ACPI: button: Power Button [PWRB] Sep 13 00:07:21.685699 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 13 00:07:21.685765 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 13 00:07:21.685774 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:07:21.685781 kernel: thunder_xcv, ver 1.0 Sep 13 00:07:21.685787 kernel: thunder_bgx, ver 1.0 Sep 13 00:07:21.685794 kernel: nicpf, ver 1.0 Sep 13 00:07:21.685800 kernel: nicvf, ver 1.0 Sep 13 00:07:21.685890 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 13 00:07:21.685953 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:07:21 UTC (1757722041) Sep 13 00:07:21.685962 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:07:21.685968 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:07:21.685975 kernel: Segment Routing with IPv6 Sep 13 00:07:21.685981 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:07:21.685988 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:07:21.685994 kernel: Key type 
dns_resolver registered Sep 13 00:07:21.686001 kernel: registered taskstats version 1 Sep 13 00:07:21.686009 kernel: Loading compiled-in X.509 certificates Sep 13 00:07:21.686016 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c' Sep 13 00:07:21.686023 kernel: Key type .fscrypt registered Sep 13 00:07:21.686029 kernel: Key type fscrypt-provisioning registered Sep 13 00:07:21.686036 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:07:21.686043 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:07:21.686050 kernel: ima: No architecture policies found Sep 13 00:07:21.686056 kernel: clk: Disabling unused clocks Sep 13 00:07:21.686063 kernel: Freeing unused kernel memory: 36416K Sep 13 00:07:21.686070 kernel: Run /init as init process Sep 13 00:07:21.686077 kernel: with arguments: Sep 13 00:07:21.686083 kernel: /init Sep 13 00:07:21.686090 kernel: with environment: Sep 13 00:07:21.686096 kernel: HOME=/ Sep 13 00:07:21.686102 kernel: TERM=linux Sep 13 00:07:21.686109 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:07:21.686117 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:07:21.686127 systemd[1]: Detected virtualization kvm. Sep 13 00:07:21.686135 systemd[1]: Detected architecture arm64. Sep 13 00:07:21.686141 systemd[1]: Running in initrd. Sep 13 00:07:21.686148 systemd[1]: No hostname configured, using default hostname. Sep 13 00:07:21.686155 systemd[1]: Hostname set to . Sep 13 00:07:21.686162 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:07:21.686169 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:07:21.686176 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:07:21.686184 systemd[1]: Reached target cryptsetup.target. Sep 13 00:07:21.686191 systemd[1]: Reached target paths.target. Sep 13 00:07:21.686198 systemd[1]: Reached target slices.target. Sep 13 00:07:21.686204 systemd[1]: Reached target swap.target. Sep 13 00:07:21.686211 systemd[1]: Reached target timers.target. Sep 13 00:07:21.686218 systemd[1]: Listening on iscsid.socket. Sep 13 00:07:21.686225 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:07:21.686234 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:07:21.686241 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:07:21.686248 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:07:21.686255 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:07:21.686262 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:07:21.686269 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:07:21.686276 systemd[1]: Reached target sockets.target. Sep 13 00:07:21.686283 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:07:21.686290 systemd[1]: Finished network-cleanup.service. Sep 13 00:07:21.686298 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:07:21.686305 systemd[1]: Starting systemd-journald.service... Sep 13 00:07:21.686312 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:07:21.686319 systemd[1]: Starting systemd-resolved.service... Sep 13 00:07:21.686326 systemd[1]: Starting systemd-vconsole-setup.service... 
Sep 13 00:07:21.686334 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:07:21.686340 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:07:21.686348 kernel: audit: type=1130 audit(1757722041.682:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.686359 systemd-journald[289]: Journal started Sep 13 00:07:21.686401 systemd-journald[289]: Runtime Journal (/run/log/journal/048e4054b57c4fa1b4f80059b914d084) is 6.0M, max 48.7M, 42.6M free. Sep 13 00:07:21.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.683437 systemd-modules-load[290]: Inserted module 'overlay' Sep 13 00:07:21.690341 systemd[1]: Started systemd-journald.service. Sep 13 00:07:21.690362 kernel: audit: type=1130 audit(1757722041.686:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.691274 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:07:21.696405 kernel: audit: type=1130 audit(1757722041.690:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.696426 kernel: audit: type=1130 audit(1757722041.691:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.697279 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:07:21.698726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:07:21.705872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:07:21.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.706248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:07:21.708372 systemd-resolved[291]: Positive Trust Anchors: Sep 13 00:07:21.710861 kernel: audit: type=1130 audit(1757722041.706:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.710880 kernel: Bridge firewalling registered Sep 13 00:07:21.708379 systemd-resolved[291]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:07:21.708408 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:07:21.710868 systemd-modules-load[290]: Inserted module 'br_netfilter' Sep 13 00:07:21.715218 systemd-resolved[291]: Defaulting to hostname 'linux'. Sep 13 00:07:21.716104 systemd[1]: Started systemd-resolved.service. Sep 13 00:07:21.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.717159 systemd[1]: Reached target nss-lookup.target. Sep 13 00:07:21.721234 kernel: audit: type=1130 audit(1757722041.716:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.720905 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:07:21.725147 kernel: audit: type=1130 audit(1757722041.720:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.725164 kernel: SCSI subsystem initialized Sep 13 00:07:21.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.722675 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:07:21.731995 dracut-cmdline[308]: dracut-dracut-053 Sep 13 00:07:21.733878 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:07:21.733901 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:07:21.733917 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:07:21.734443 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca Sep 13 00:07:21.738391 systemd-modules-load[290]: Inserted module 'dm_multipath' Sep 13 00:07:21.739233 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:07:21.741426 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:07:21.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.744879 kernel: audit: type=1130 audit(1757722041.740:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:07:21.748781 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:07:21.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.752891 kernel: audit: type=1130 audit(1757722041.749:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.795868 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:07:21.807887 kernel: iscsi: registered transport (tcp) Sep 13 00:07:21.823861 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:07:21.823885 kernel: QLogic iSCSI HBA Driver Sep 13 00:07:21.859161 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:07:21.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:21.861576 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:07:21.906874 kernel: raid6: neonx8 gen() 13762 MB/s Sep 13 00:07:21.923860 kernel: raid6: neonx8 xor() 10839 MB/s Sep 13 00:07:21.940869 kernel: raid6: neonx4 gen() 13479 MB/s Sep 13 00:07:21.957863 kernel: raid6: neonx4 xor() 11186 MB/s Sep 13 00:07:21.974859 kernel: raid6: neonx2 gen() 12966 MB/s Sep 13 00:07:21.991863 kernel: raid6: neonx2 xor() 10395 MB/s Sep 13 00:07:22.008861 kernel: raid6: neonx1 gen() 10517 MB/s Sep 13 00:07:22.025866 kernel: raid6: neonx1 xor() 8784 MB/s Sep 13 00:07:22.042867 kernel: raid6: int64x8 gen() 6268 MB/s Sep 13 00:07:22.059865 kernel: raid6: int64x8 xor() 3530 MB/s Sep 13 00:07:22.076864 kernel: raid6: int64x4 gen() 7230 MB/s Sep 13 00:07:22.093856 kernel: raid6: int64x4 xor() 3852 MB/s Sep 13 00:07:22.110875 kernel: raid6: int64x2 gen() 6150 MB/s Sep 13 00:07:22.127857 kernel: raid6: int64x2 xor() 3319 MB/s Sep 13 00:07:22.144860 kernel: raid6: int64x1 gen() 5043 MB/s Sep 13 00:07:22.162137 kernel: raid6: int64x1 xor() 2642 MB/s Sep 13 00:07:22.162152 kernel: raid6: using algorithm neonx8 gen() 13762 MB/s Sep 13 00:07:22.162161 kernel: raid6: .... xor() 10839 MB/s, rmw enabled Sep 13 00:07:22.162169 kernel: raid6: using neon recovery algorithm Sep 13 00:07:22.173097 kernel: xor: measuring software checksum speed Sep 13 00:07:22.173121 kernel: 8regs : 16776 MB/sec Sep 13 00:07:22.174154 kernel: 32regs : 20728 MB/sec Sep 13 00:07:22.174165 kernel: arm64_neon : 26972 MB/sec Sep 13 00:07:22.174173 kernel: xor: using function: arm64_neon (26972 MB/sec) Sep 13 00:07:22.226871 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 13 00:07:22.238015 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:07:22.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:22.238000 audit: BPF prog-id=7 op=LOAD Sep 13 00:07:22.238000 audit: BPF prog-id=8 op=LOAD Sep 13 00:07:22.239592 systemd[1]: Starting systemd-udevd.service... Sep 13 00:07:22.252081 systemd-udevd[491]: Using default interface naming scheme 'v252'. Sep 13 00:07:22.256101 systemd[1]: Started systemd-udevd.service. 
Sep 13 00:07:22.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:22.258069 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:07:22.269237 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation Sep 13 00:07:22.298379 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:07:22.299833 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:07:22.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:22.333303 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:07:22.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:22.367738 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:07:22.371867 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:07:22.371892 kernel: GPT:9289727 != 19775487 Sep 13 00:07:22.371901 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:07:22.371910 kernel: GPT:9289727 != 19775487 Sep 13 00:07:22.371918 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:07:22.371927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:07:22.390072 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:07:22.394646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:07:22.395962 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:07:22.398346 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (551) Sep 13 00:07:22.399828 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:07:22.407408 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:07:22.408999 systemd[1]: Starting disk-uuid.service... Sep 13 00:07:22.414867 disk-uuid[564]: Primary Header is updated. Sep 13 00:07:22.414867 disk-uuid[564]: Secondary Entries is updated. Sep 13 00:07:22.414867 disk-uuid[564]: Secondary Header is updated. Sep 13 00:07:22.417863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:07:22.420869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:07:22.423863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:07:23.424871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:07:23.425557 disk-uuid[565]: The operation has completed successfully. Sep 13 00:07:23.448694 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:07:23.448789 systemd[1]: Finished disk-uuid.service. Sep 13 00:07:23.450344 systemd[1]: Starting verity-setup.service... Sep 13 00:07:23.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:07:23.462866 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:07:23.483047 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:07:23.485006 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:07:23.486979 systemd[1]: Finished verity-setup.service. Sep 13 00:07:23.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.533855 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:07:23.534060 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:07:23.534753 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:07:23.535514 systemd[1]: Starting ignition-setup.service... Sep 13 00:07:23.537236 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:07:23.545193 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:07:23.545229 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:07:23.545239 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:07:23.553305 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:07:23.559694 systemd[1]: Finished ignition-setup.service. Sep 13 00:07:23.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.561279 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:07:23.614442 ignition[652]: Ignition 2.14.0 Sep 13 00:07:23.614451 ignition[652]: Stage: fetch-offline Sep 13 00:07:23.614487 ignition[652]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:07:23.614497 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:07:23.614660 ignition[652]: parsed url from cmdline: "" Sep 13 00:07:23.614663 ignition[652]: no config URL provided Sep 13 00:07:23.614668 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:07:23.614675 ignition[652]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:07:23.614693 ignition[652]: op(1): [started] loading QEMU firmware config module Sep 13 00:07:23.614698 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:07:23.620540 ignition[652]: op(1): [finished] loading QEMU firmware config module Sep 13 00:07:23.620562 ignition[652]: QEMU firmware config was not found. Ignoring... Sep 13 00:07:23.628207 ignition[652]: parsing config with SHA512: 43baa16e7d922604c19b26d9a4c292f38b466d75f26e7c5fc5f6bbf3a323a7d59875b03d3e7b7a1a436d1c786ded85338f7c888e9cef368d0687bb21a3a9fb0a Sep 13 00:07:23.631128 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:07:23.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.632000 audit: BPF prog-id=9 op=LOAD Sep 13 00:07:23.633124 systemd[1]: Starting systemd-networkd.service... 
Sep 13 00:07:23.638548 unknown[652]: fetched base config from "system" Sep 13 00:07:23.638559 unknown[652]: fetched user config from "qemu" Sep 13 00:07:23.638936 ignition[652]: fetch-offline: fetch-offline passed Sep 13 00:07:23.639002 ignition[652]: Ignition finished successfully Sep 13 00:07:23.642101 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:07:23.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.656584 systemd-networkd[743]: lo: Link UP Sep 13 00:07:23.656597 systemd-networkd[743]: lo: Gained carrier Sep 13 00:07:23.657345 systemd-networkd[743]: Enumeration completed Sep 13 00:07:23.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.657496 systemd[1]: Started systemd-networkd.service. Sep 13 00:07:23.657792 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:07:23.658777 systemd[1]: Reached target network.target. Sep 13 00:07:23.659287 systemd-networkd[743]: eth0: Link UP Sep 13 00:07:23.659291 systemd-networkd[743]: eth0: Gained carrier Sep 13 00:07:23.660232 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:07:23.661079 systemd[1]: Starting ignition-kargs.service... Sep 13 00:07:23.663172 systemd[1]: Starting iscsiuio.service... Sep 13 00:07:23.670444 ignition[745]: Ignition 2.14.0 Sep 13 00:07:23.670453 ignition[745]: Stage: kargs Sep 13 00:07:23.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.670726 systemd[1]: Started iscsiuio.service. Sep 13 00:07:23.670557 ignition[745]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:07:23.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.672372 systemd[1]: Starting iscsid.service... Sep 13 00:07:23.670567 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:07:23.676624 iscsid[754]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:07:23.676624 iscsid[754]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 00:07:23.676624 iscsid[754]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:07:23.676624 iscsid[754]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:07:23.676624 iscsid[754]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 13 00:07:23.676624 iscsid[754]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:07:23.676624 iscsid[754]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:07:23.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.673078 systemd[1]: Finished ignition-kargs.service. Sep 13 00:07:23.671246 ignition[745]: kargs: kargs passed Sep 13 00:07:23.674802 systemd[1]: Starting ignition-disks.service... Sep 13 00:07:23.671286 ignition[745]: Ignition finished successfully Sep 13 00:07:23.679461 systemd[1]: Started iscsid.service. Sep 13 00:07:23.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.683283 ignition[755]: Ignition 2.14.0 Sep 13 00:07:23.680365 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.29/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:07:23.683289 ignition[755]: Stage: disks Sep 13 00:07:23.682773 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:07:23.683388 ignition[755]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:07:23.685740 systemd[1]: Finished ignition-disks.service. Sep 13 00:07:23.683398 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:07:23.687249 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:07:23.684148 ignition[755]: disks: disks passed Sep 13 00:07:23.688762 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:07:23.684198 ignition[755]: Ignition finished successfully Sep 13 00:07:23.690607 systemd[1]: Reached target local-fs.target. Sep 13 00:07:23.691726 systemd[1]: Reached target sysinit.target. Sep 13 00:07:23.692715 systemd[1]: Reached target basic.target. Sep 13 00:07:23.694143 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:07:23.695101 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:07:23.695985 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:07:23.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.697003 systemd[1]: Reached target remote-fs.target. Sep 13 00:07:23.699248 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:07:23.707192 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:07:23.708895 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:07:23.720305 systemd-fsck[776]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 13 00:07:23.723626 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:07:23.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.725795 systemd[1]: Mounting sysroot.mount... Sep 13 00:07:23.732589 systemd[1]: Mounted sysroot.mount. Sep 13 00:07:23.733700 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Sep 13 00:07:23.733339 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:07:23.735367 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:07:23.736190 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:07:23.736232 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:07:23.736256 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:07:23.738246 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:07:23.741110 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:07:23.745718 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:07:23.749611 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:07:23.754170 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:07:23.758127 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:07:23.785584 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:07:23.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.787194 systemd[1]: Starting ignition-mount.service... Sep 13 00:07:23.788384 systemd[1]: Starting sysroot-boot.service... Sep 13 00:07:23.792555 bash[827]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:07:23.801471 ignition[829]: INFO : Ignition 2.14.0 Sep 13 00:07:23.801471 ignition[829]: INFO : Stage: mount Sep 13 00:07:23.803458 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:07:23.803458 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:07:23.803458 ignition[829]: INFO : mount: mount passed Sep 13 00:07:23.803458 ignition[829]: INFO : Ignition finished successfully Sep 13 00:07:23.806287 systemd[1]: Finished ignition-mount.service. Sep 13 00:07:23.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:23.808284 systemd[1]: Finished sysroot-boot.service. Sep 13 00:07:23.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:24.493793 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:07:24.500246 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (837) Sep 13 00:07:24.500277 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:07:24.500293 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:07:24.501189 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:07:24.503882 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:07:24.505283 systemd[1]: Starting ignition-files.service... 
Sep 13 00:07:24.519036 ignition[857]: INFO : Ignition 2.14.0 Sep 13 00:07:24.519036 ignition[857]: INFO : Stage: files Sep 13 00:07:24.520386 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:07:24.520386 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:07:24.520386 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:07:24.523150 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:07:24.523150 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:07:24.523150 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:07:24.523150 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:07:24.523150 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:07:24.523038 unknown[857]: wrote ssh authorized keys file for user: core Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:07:24.529537 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 13 00:07:24.941641 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 13 00:07:25.229956 systemd-networkd[743]: eth0: Gained IPv6LL Sep 13 00:07:25.342784 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:07:25.342784 ignition[857]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Sep 13 00:07:25.346092 ignition[857]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:07:25.346092 ignition[857]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:07:25.346092 ignition[857]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 13 00:07:25.346092 ignition[857]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 
13 00:07:25.346092 ignition[857]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:07:25.366398 ignition[857]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:07:25.368531 ignition[857]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:07:25.368531 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:07:25.368531 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:07:25.368531 ignition[857]: INFO : files: files passed Sep 13 00:07:25.368531 ignition[857]: INFO : Ignition finished successfully Sep 13 00:07:25.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.368713 systemd[1]: Finished ignition-files.service. Sep 13 00:07:25.371226 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:07:25.376814 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:07:25.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.372161 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:07:25.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.380291 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:07:25.372797 systemd[1]: Starting ignition-quench.service... Sep 13 00:07:25.376641 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:07:25.376723 systemd[1]: Finished ignition-quench.service. Sep 13 00:07:25.378066 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:07:25.379107 systemd[1]: Reached target ignition-complete.target. Sep 13 00:07:25.381402 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:07:25.394290 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:07:25.394391 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:07:25.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.395729 systemd[1]: Reached target initrd-fs.target. Sep 13 00:07:25.396606 systemd[1]: Reached target initrd.target. 
Sep 13 00:07:25.397651 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:07:25.398463 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:07:25.408766 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:07:25.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.410247 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:07:25.418338 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:07:25.419052 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:07:25.420115 systemd[1]: Stopped target timers.target. Sep 13 00:07:25.421120 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:07:25.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.421221 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:07:25.422206 systemd[1]: Stopped target initrd.target. Sep 13 00:07:25.423221 systemd[1]: Stopped target basic.target. Sep 13 00:07:25.424200 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:07:25.425226 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:07:25.426238 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:07:25.427368 systemd[1]: Stopped target remote-fs.target. Sep 13 00:07:25.428434 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:07:25.429488 systemd[1]: Stopped target sysinit.target. Sep 13 00:07:25.430437 systemd[1]: Stopped target local-fs.target. Sep 13 00:07:25.431492 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:07:25.432575 systemd[1]: Stopped target swap.target. Sep 13 00:07:25.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.433477 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:07:25.433580 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:07:25.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.434597 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:07:25.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.435507 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:07:25.435599 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:07:25.436703 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:07:25.436792 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:07:25.437770 systemd[1]: Stopped target paths.target. Sep 13 00:07:25.438661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:07:25.442892 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:07:25.443630 systemd[1]: Stopped target slices.target. Sep 13 00:07:25.444672 systemd[1]: Stopped target sockets.target. Sep 13 00:07:25.445614 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 13 00:07:25.445693 systemd[1]: Closed iscsid.socket. Sep 13 00:07:25.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.446628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:07:25.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.446724 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:07:25.447807 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:07:25.447905 systemd[1]: Stopped ignition-files.service. Sep 13 00:07:25.449543 systemd[1]: Stopping ignition-mount.service... Sep 13 00:07:25.451328 systemd[1]: Stopping iscsiuio.service... Sep 13 00:07:25.454393 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:07:25.454957 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:07:25.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.456650 ignition[897]: INFO : Ignition 2.14.0 Sep 13 00:07:25.456650 ignition[897]: INFO : Stage: umount Sep 13 00:07:25.456650 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:07:25.456650 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:07:25.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.455093 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:07:25.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.461819 ignition[897]: INFO : umount: umount passed Sep 13 00:07:25.461819 ignition[897]: INFO : Ignition finished successfully Sep 13 00:07:25.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.456129 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:07:25.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.456229 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:07:25.458736 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:07:25.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.458823 systemd[1]: Stopped iscsiuio.service. 
Sep 13 00:07:25.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.460321 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:07:25.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.460386 systemd[1]: Closed iscsiuio.socket. Sep 13 00:07:25.462072 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:07:25.462156 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:07:25.463291 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:07:25.463364 systemd[1]: Stopped ignition-mount.service. Sep 13 00:07:25.464509 systemd[1]: Stopped target network.target. Sep 13 00:07:25.465718 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:07:25.465775 systemd[1]: Stopped ignition-disks.service. Sep 13 00:07:25.466919 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:07:25.466958 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:07:25.468731 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:07:25.468767 systemd[1]: Stopped ignition-setup.service. Sep 13 00:07:25.469894 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:07:25.471534 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:07:25.473242 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:07:25.481899 systemd-networkd[743]: eth0: DHCPv6 lease lost Sep 13 00:07:25.483493 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:07:25.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.483592 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:07:25.485709 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:07:25.485739 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:07:25.487342 systemd[1]: Stopping network-cleanup.service... Sep 13 00:07:25.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.488421 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:07:25.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.488478 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:07:25.491000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:07:25.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.489792 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:07:25.489832 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:07:25.491676 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:07:25.491718 systemd[1]: Stopped systemd-modules-load.service. 
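Each systemd state change in this teardown is mirrored by a kernel audit record (SERVICE_START / SERVICE_STOP). Those records use a flat key=value layout, so the unit and action can be pulled out with a short script. A minimal sketch in Python, using a sample line copied verbatim from this log; the regex assumes only the layout visible here:

    import re

    # Extract the unit name and action from an audit SERVICE_START/SERVICE_STOP record.
    line = ("Sep 13 00:07:25.421000 audit[1]: SERVICE_STOP pid=1 uid=0 "
            "auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot "
            'comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? '
            "terminal=? res=success'")

    m = re.search(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([^ ]+)", line)
    if m:
        action, unit = m.group(1), m.group(2)
        print(f"{unit}: {action}")   # -> dracut-pre-pivot: SERVICE_STOP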
Sep 13 00:07:25.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.492704 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:07:25.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.496977 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:07:25.497592 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:07:25.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.497741 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:07:25.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.503000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:07:25.499046 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:07:25.499127 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:07:25.500714 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:07:25.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.500874 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:07:25.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.502400 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:07:25.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.502474 systemd[1]: Stopped network-cleanup.service. Sep 13 00:07:25.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.503624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:07:25.503662 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:07:25.504522 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:07:25.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.504554 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:07:25.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.505656 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 13 00:07:25.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.505701 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:07:25.506709 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:07:25.506748 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:07:25.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.507990 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:07:25.508050 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:07:25.509005 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:07:25.509047 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:07:25.510998 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:07:25.512018 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:07:25.512076 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:07:25.513867 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:07:25.513909 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:07:25.514605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:07:25.514653 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:07:25.516725 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 00:07:25.517347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:07:25.517442 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:07:25.518724 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:07:25.520706 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:07:25.527917 systemd[1]: Switching root. Sep 13 00:07:25.549245 iscsid[754]: iscsid shutting down. Sep 13 00:07:25.549862 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Sep 13 00:07:25.549905 systemd-journald[289]: Journal stopped Sep 13 00:07:27.568671 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:07:27.568727 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:07:27.568739 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:07:27.568752 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:07:27.568761 kernel: SELinux: policy capability open_perms=1 Sep 13 00:07:27.568770 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:07:27.568780 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:07:27.568789 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:07:27.568798 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:07:27.568809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:07:27.568819 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:07:27.568829 systemd[1]: Successfully loaded SELinux policy in 32.194ms. 
Sep 13 00:07:27.568864 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.898ms. Sep 13 00:07:27.568877 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:07:27.568891 systemd[1]: Detected virtualization kvm. Sep 13 00:07:27.568902 systemd[1]: Detected architecture arm64. Sep 13 00:07:27.568912 systemd[1]: Detected first boot. Sep 13 00:07:27.568922 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:07:27.568934 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:07:27.568943 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:07:27.568958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:07:27.568969 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:07:27.568980 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:27.568991 kernel: kauditd_printk_skb: 82 callbacks suppressed Sep 13 00:07:27.569002 kernel: audit: type=1334 audit(1757722047.454:86): prog-id=12 op=LOAD Sep 13 00:07:27.569013 kernel: audit: type=1334 audit(1757722047.454:87): prog-id=3 op=UNLOAD Sep 13 00:07:27.569023 kernel: audit: type=1334 audit(1757722047.454:88): prog-id=13 op=LOAD Sep 13 00:07:27.569032 kernel: audit: type=1334 audit(1757722047.455:89): prog-id=14 op=LOAD Sep 13 00:07:27.569043 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:07:27.569052 kernel: audit: type=1334 audit(1757722047.455:90): prog-id=4 op=UNLOAD Sep 13 00:07:27.569062 kernel: audit: type=1334 audit(1757722047.455:91): prog-id=5 op=UNLOAD Sep 13 00:07:27.569073 kernel: audit: type=1131 audit(1757722047.456:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.569084 systemd[1]: Stopped iscsid.service. Sep 13 00:07:27.569095 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:07:27.569105 kernel: audit: type=1131 audit(1757722047.462:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.569115 systemd[1]: Stopped initrd-switch-root.service. Sep 13 00:07:27.569126 kernel: audit: type=1130 audit(1757722047.466:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.569138 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Sep 13 00:07:27.569151 kernel: audit: type=1131 audit(1757722047.466:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.569170 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:07:27.569180 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:07:27.569192 systemd[1]: Created slice system-getty.slice. Sep 13 00:07:27.569202 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:07:27.569214 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:07:27.569224 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:07:27.569236 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:07:27.569247 systemd[1]: Created slice user.slice. Sep 13 00:07:27.569262 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:07:27.569273 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:07:27.569284 systemd[1]: Set up automount boot.automount. Sep 13 00:07:27.569296 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:07:27.569307 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 00:07:27.569318 systemd[1]: Stopped target initrd-fs.target. Sep 13 00:07:27.569328 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 00:07:27.569339 systemd[1]: Reached target integritysetup.target. Sep 13 00:07:27.569351 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:07:27.569361 systemd[1]: Reached target remote-fs.target. Sep 13 00:07:27.569372 systemd[1]: Reached target slices.target. Sep 13 00:07:27.569382 systemd[1]: Reached target swap.target. Sep 13 00:07:27.569392 systemd[1]: Reached target torcx.target. Sep 13 00:07:27.569402 systemd[1]: Reached target veritysetup.target. Sep 13 00:07:27.569413 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:07:27.569423 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:07:27.569434 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:07:27.569445 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:07:27.569457 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:07:27.569470 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:07:27.569480 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:07:27.569491 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:07:27.569501 systemd[1]: Mounting media.mount... Sep 13 00:07:27.569511 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:07:27.569522 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:07:27.569532 systemd[1]: Mounting tmp.mount... Sep 13 00:07:27.569542 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:07:27.569554 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:07:27.569565 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:07:27.569576 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:07:27.569586 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:07:27.569605 systemd[1]: Starting modprobe@drm.service... Sep 13 00:07:27.569617 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:07:27.569627 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:07:27.569638 systemd[1]: Starting modprobe@loop.service... Sep 13 00:07:27.569648 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 13 00:07:27.569660 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:07:27.569671 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 00:07:27.569681 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:07:27.569692 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:07:27.569703 kernel: loop: module loaded Sep 13 00:07:27.569713 systemd[1]: Stopped systemd-journald.service. Sep 13 00:07:27.569723 kernel: fuse: init (API version 7.34) Sep 13 00:07:27.569741 systemd[1]: Starting systemd-journald.service... Sep 13 00:07:27.569766 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:07:27.569778 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:07:27.569789 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:07:27.569799 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:07:27.569810 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:07:27.569820 systemd[1]: Stopped verity-setup.service. Sep 13 00:07:27.569831 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:07:27.569841 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:07:27.569869 systemd[1]: Mounted media.mount. Sep 13 00:07:27.569882 systemd-journald[1001]: Journal started Sep 13 00:07:27.569927 systemd-journald[1001]: Runtime Journal (/run/log/journal/048e4054b57c4fa1b4f80059b914d084) is 6.0M, max 48.7M, 42.6M free. Sep 13 00:07:25.604000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:07:25.682000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:07:25.682000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:07:25.682000 audit: BPF prog-id=10 op=LOAD Sep 13 00:07:25.682000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:07:25.682000 audit: BPF prog-id=11 op=LOAD Sep 13 00:07:25.682000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:07:25.722000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:07:25.722000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022544 a1=4000028510 a2=4000026a00 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:25.722000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:07:25.723000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:07:25.723000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022619 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:25.723000 audit: CWD cwd="/" Sep 13 00:07:25.723000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:07:25.723000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:07:25.723000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:07:27.454000 audit: BPF prog-id=12 op=LOAD Sep 13 00:07:27.454000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:07:27.454000 audit: BPF prog-id=13 op=LOAD Sep 13 00:07:27.455000 audit: BPF prog-id=14 op=LOAD Sep 13 00:07:27.455000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:07:27.455000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:07:27.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.472000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:07:27.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:07:27.549000 audit: BPF prog-id=15 op=LOAD Sep 13 00:07:27.549000 audit: BPF prog-id=16 op=LOAD Sep 13 00:07:27.549000 audit: BPF prog-id=17 op=LOAD Sep 13 00:07:27.549000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:07:27.549000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:07:27.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.566000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:07:27.566000 audit[1001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc6efeeb0 a2=4000 a3=1 items=0 ppid=1 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:27.566000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:07:25.720770 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:07:27.453255 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:07:25.721181 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:07:27.453268 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:07:25.721201 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:07:27.571182 systemd[1]: Started systemd-journald.service. Sep 13 00:07:27.457180 systemd[1]: systemd-journald.service: Deactivated successfully. 
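The audit PROCTITLE values above are the process command line, hex-encoded with NUL bytes separating the arguments and truncated by the kernel, which is why the last path ends in ".la" (presumably /run/systemd/generator.late). A small sketch to decode the torcx-generator record, with the hex string copied from the log:

    # Decode an audit PROCTITLE field: hex -> bytes, split on NUL to recover argv.
    hexdata = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
        "746F7263782D67656E657261746F72"
        "002F72756E2F73797374656D642F67656E657261746F72"
        "002F72756E2F73797374656D642F67656E657261746F722E6561726C79"
        "002F72756E2F73797374656D642F67656E657261746F722E6C61"
    )

    argv = [a.decode() for a in bytes.fromhex(hexdata).split(b"\x00")]
    print(argv)
    # ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator',
    #  '/run/systemd/generator.early', '/run/systemd/generator.la']  (last arg truncated in the log)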
Sep 13 00:07:25.721230 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 00:07:25.721239 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 00:07:25.721269 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 00:07:25.721281 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:07:25.721464 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:07:25.721511 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:07:25.721524 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:07:25.722568 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:07:27.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:25.722602 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:07:25.722636 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:07:25.722653 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:07:25.722673 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:07:27.572256 systemd[1]: Mounted sys-kernel-debug.mount. 
Sep 13 00:07:25.722686 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:07:27.195492 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:07:27.195768 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:07:27.195913 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:07:27.196087 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:07:27.196138 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:07:27.196195 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-09-13T00:07:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:07:27.573211 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:07:27.573966 systemd[1]: Mounted tmp.mount. Sep 13 00:07:27.575093 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:07:27.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.576013 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:07:27.576178 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:07:27.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.577097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:07:27.577259 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:07:27.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.578185 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:07:27.578346 systemd[1]: Finished modprobe@drm.service. Sep 13 00:07:27.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.579321 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:07:27.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.580392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:07:27.580551 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:07:27.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.581695 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:07:27.581876 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:07:27.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.582733 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:07:27.582921 systemd[1]: Finished modprobe@loop.service. Sep 13 00:07:27.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.583841 systemd[1]: Finished systemd-modules-load.service. 
Sep 13 00:07:27.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.584959 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:07:27.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.586175 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:07:27.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.587423 systemd[1]: Reached target network-pre.target. Sep 13 00:07:27.589566 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:07:27.591547 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:07:27.592206 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:07:27.593720 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:07:27.595704 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:07:27.596608 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:07:27.597682 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:07:27.598484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:07:27.599524 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:07:27.602036 systemd-journald[1001]: Time spent on flushing to /var/log/journal/048e4054b57c4fa1b4f80059b914d084 is 16.616ms for 978 entries. Sep 13 00:07:27.602036 systemd-journald[1001]: System Journal (/var/log/journal/048e4054b57c4fa1b4f80059b914d084) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:07:27.634608 systemd-journald[1001]: Received client request to flush runtime journal. Sep 13 00:07:27.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.601378 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:07:27.604397 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:07:27.635832 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:07:27.605776 systemd[1]: Mounted sys-kernel-config.mount. 
Sep 13 00:07:27.607373 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:07:27.608359 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:07:27.612111 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:07:27.614693 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:07:27.620435 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:07:27.624075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:07:27.628054 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:07:27.635553 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:07:27.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.643120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:07:27.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.988714 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:07:27.990871 systemd[1]: Starting systemd-udevd.service... Sep 13 00:07:27.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:27.989000 audit: BPF prog-id=18 op=LOAD Sep 13 00:07:27.989000 audit: BPF prog-id=19 op=LOAD Sep 13 00:07:27.989000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:07:27.989000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:07:28.006437 systemd-udevd[1035]: Using default interface naming scheme 'v252'. Sep 13 00:07:28.020585 systemd[1]: Started systemd-udevd.service. Sep 13 00:07:28.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.021000 audit: BPF prog-id=20 op=LOAD Sep 13 00:07:28.022916 systemd[1]: Starting systemd-networkd.service... Sep 13 00:07:28.028000 audit: BPF prog-id=21 op=LOAD Sep 13 00:07:28.028000 audit: BPF prog-id=22 op=LOAD Sep 13 00:07:28.028000 audit: BPF prog-id=23 op=LOAD Sep 13 00:07:28.029887 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:07:28.040758 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 13 00:07:28.054838 systemd[1]: Started systemd-userdbd.service. Sep 13 00:07:28.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.074479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:07:28.098344 systemd-networkd[1043]: lo: Link UP Sep 13 00:07:28.098355 systemd-networkd[1043]: lo: Gained carrier Sep 13 00:07:28.098740 systemd-networkd[1043]: Enumeration completed Sep 13 00:07:28.098834 systemd[1]: Started systemd-networkd.service. Sep 13 00:07:28.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:07:28.099751 systemd-networkd[1043]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:07:28.101042 systemd-networkd[1043]: eth0: Link UP Sep 13 00:07:28.101053 systemd-networkd[1043]: eth0: Gained carrier Sep 13 00:07:28.122974 systemd-networkd[1043]: eth0: DHCPv4 address 10.0.0.29/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:07:28.128214 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:07:28.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.130166 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:07:28.138750 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:07:28.166710 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:07:28.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.167631 systemd[1]: Reached target cryptsetup.target. Sep 13 00:07:28.169507 systemd[1]: Starting lvm2-activation.service... Sep 13 00:07:28.173352 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:07:28.204824 systemd[1]: Finished lvm2-activation.service. Sep 13 00:07:28.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.205607 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:07:28.206304 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:07:28.206330 systemd[1]: Reached target local-fs.target. Sep 13 00:07:28.206914 systemd[1]: Reached target machines.target. Sep 13 00:07:28.208718 systemd[1]: Starting ldconfig.service... Sep 13 00:07:28.209659 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.209718 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:07:28.210793 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:07:28.212619 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:07:28.214524 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:07:28.216460 systemd[1]: Starting systemd-sysext.service... Sep 13 00:07:28.217434 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Sep 13 00:07:28.218549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:07:28.230801 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:07:28.235072 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:07:28.235299 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:07:28.238904 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Sep 13 00:07:28.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.248884 kernel: loop0: detected capacity change from 0 to 211168 Sep 13 00:07:28.298963 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:07:28.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.305326 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) Sep 13 00:07:28.305326 systemd-fsck[1079]: /dev/vda1: 236 files, 117310/258078 clusters Sep 13 00:07:28.306868 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:07:28.308456 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:07:28.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.328938 kernel: loop1: detected capacity change from 0 to 211168 Sep 13 00:07:28.334967 (sd-sysext)[1083]: Using extensions 'kubernetes'. Sep 13 00:07:28.335561 (sd-sysext)[1083]: Merged extensions into '/usr'. Sep 13 00:07:28.355444 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.356759 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:07:28.358717 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:07:28.360525 systemd[1]: Starting modprobe@loop.service... Sep 13 00:07:28.361311 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.361441 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:07:28.362197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:07:28.362320 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:07:28.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.363535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:07:28.363650 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:07:28.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.364955 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:07:28.365063 systemd[1]: Finished modprobe@loop.service. Sep 13 00:07:28.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.366221 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:07:28.366315 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.396156 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:07:28.399787 systemd[1]: Finished ldconfig.service. Sep 13 00:07:28.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.566435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:07:28.568272 systemd[1]: Mounting boot.mount... Sep 13 00:07:28.570011 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:07:28.575834 systemd[1]: Mounted boot.mount. Sep 13 00:07:28.576629 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:07:28.578429 systemd[1]: Finished systemd-sysext.service. Sep 13 00:07:28.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.581102 systemd[1]: Starting ensure-sysext.service... Sep 13 00:07:28.582724 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:07:28.583875 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:07:28.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.587750 systemd[1]: Reloading. Sep 13 00:07:28.592191 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:07:28.593194 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:07:28.594459 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:07:28.620660 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-13T00:07:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:07:28.620688 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-13T00:07:28Z" level=info msg="torcx already run" Sep 13 00:07:28.682212 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
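The (sd-sysext) messages above record systemd-sysext overlaying a 'kubernetes' system extension onto /usr before ldconfig and the daemon reload run. The mechanism requires each extension to ship an extension-release file whose ID (and VERSION_ID or SYSEXT_LEVEL, unless ID=_any) matches the host's os-release. A minimal directory-based layout, shown here only to illustrate the general format rather than the actual extension this image provides, is:

    /etc/extensions/kubernetes/usr/bin/kubelet
    /etc/extensions/kubernetes/usr/lib/extension-release.d/extension-release.kubernetes

    # extension-release.kubernetes
    ID=_any              # or: ID=flatcar together with a matching VERSION_ID= / SYSEXT_LEVEL=

Running "systemd-sysext merge" (or "refresh") then mounts an overlay over /usr containing the extension's files, which is what the earlier "Merged extensions into '/usr'" entry reports.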
Sep 13 00:07:28.682231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:07:28.698161 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:28.738000 audit: BPF prog-id=24 op=LOAD Sep 13 00:07:28.738000 audit: BPF prog-id=25 op=LOAD Sep 13 00:07:28.738000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:07:28.738000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:07:28.740000 audit: BPF prog-id=26 op=LOAD Sep 13 00:07:28.740000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:07:28.741000 audit: BPF prog-id=27 op=LOAD Sep 13 00:07:28.741000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:07:28.741000 audit: BPF prog-id=28 op=LOAD Sep 13 00:07:28.741000 audit: BPF prog-id=29 op=LOAD Sep 13 00:07:28.741000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:07:28.741000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:07:28.742000 audit: BPF prog-id=30 op=LOAD Sep 13 00:07:28.742000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:07:28.742000 audit: BPF prog-id=31 op=LOAD Sep 13 00:07:28.742000 audit: BPF prog-id=32 op=LOAD Sep 13 00:07:28.742000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:07:28.742000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:07:28.745112 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:07:28.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.749431 systemd[1]: Starting audit-rules.service... Sep 13 00:07:28.751411 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:07:28.753781 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:07:28.754000 audit: BPF prog-id=33 op=LOAD Sep 13 00:07:28.756230 systemd[1]: Starting systemd-resolved.service... Sep 13 00:07:28.756000 audit: BPF prog-id=34 op=LOAD Sep 13 00:07:28.758321 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:07:28.760497 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:07:28.762145 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:07:28.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.764783 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:07:28.767523 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.768757 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:07:28.767000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.770571 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:07:28.772362 systemd[1]: Starting modprobe@loop.service... Sep 13 00:07:28.772997 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:07:28.773146 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:07:28.773283 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:07:28.774076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:07:28.774201 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:07:28.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.775263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:07:28.775383 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:07:28.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.776522 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:07:28.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.777731 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:07:28.777858 systemd[1]: Finished modprobe@loop.service. Sep 13 00:07:28.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.782004 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.783304 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:07:28.785056 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:07:28.786714 systemd[1]: Starting modprobe@loop.service... Sep 13 00:07:28.787401 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.787523 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:07:28.788745 systemd[1]: Starting systemd-update-done.service... 
Sep 13 00:07:28.789502 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:07:28.790645 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:07:28.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.791870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:07:28.791981 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:07:28.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.793028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:07:28.793223 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:07:28.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.794272 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:07:28.794383 systemd[1]: Finished modprobe@loop.service. Sep 13 00:07:28.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.795643 systemd[1]: Finished systemd-update-done.service. Sep 13 00:07:28.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:28.802007 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.803181 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 13 00:07:28.804788 augenrules[1179]: No rules Sep 13 00:07:28.803000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:07:28.803000 audit[1179]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff43d09a0 a2=420 a3=0 items=0 ppid=1152 pid=1179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:28.803000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:07:28.805215 systemd[1]: Starting modprobe@drm.service... Sep 13 00:07:28.807065 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:07:28.809159 systemd[1]: Starting modprobe@loop.service... Sep 13 00:07:28.809797 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.809931 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:07:28.811138 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:07:28.812291 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:07:28.812686 systemd-timesyncd[1159]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:07:28.812731 systemd-timesyncd[1159]: Initial clock synchronization to Sat 2025-09-13 00:07:28.895407 UTC. Sep 13 00:07:28.813402 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:07:28.814950 systemd-resolved[1156]: Positive Trust Anchors: Sep 13 00:07:28.814957 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:07:28.814984 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:07:28.815009 systemd[1]: Finished audit-rules.service. Sep 13 00:07:28.816164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:07:28.816312 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:07:28.817771 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:07:28.817929 systemd[1]: Finished modprobe@drm.service. Sep 13 00:07:28.819251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:07:28.819462 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:07:28.820740 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:07:28.820875 systemd[1]: Finished modprobe@loop.service. Sep 13 00:07:28.822154 systemd[1]: Reached target time-set.target. Sep 13 00:07:28.822768 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:07:28.822808 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.823209 systemd[1]: Finished ensure-sysext.service. 
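The audit-rules.service activity above is augenrules at work: it concatenates any /etc/audit/rules.d/*.rules drop-ins into /etc/audit/audit.rules and loads the result, which is why the PROCTITLE hex in the SYSCALL record decodes (NUL-separated argv) to "/sbin/auditctl -R /etc/audit/audit.rules", and why "No rules" is reported when the assembled file contains no rule lines. Purely as a hypothetical illustration of the drop-in format, nothing of the sort exists on this host, a file such as /etc/audit/rules.d/10-exec.rules might contain:

    ## 10-exec.rules (hypothetical example; this system ships no rules)
    ## flush previously loaded rules, set the kernel backlog, then record every execve() under key "exec"
    -D
    -b 8192
    -a always,exit -F arch=b64 -S execve -k exec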
Sep 13 00:07:28.825982 systemd-resolved[1156]: Defaulting to hostname 'linux'. Sep 13 00:07:28.827361 systemd[1]: Started systemd-resolved.service. Sep 13 00:07:28.828160 systemd[1]: Reached target network.target. Sep 13 00:07:28.828736 systemd[1]: Reached target nss-lookup.target. Sep 13 00:07:28.829385 systemd[1]: Reached target sysinit.target. Sep 13 00:07:28.830023 systemd[1]: Started motdgen.path. Sep 13 00:07:28.830554 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:07:28.831558 systemd[1]: Started logrotate.timer. Sep 13 00:07:28.832349 systemd[1]: Started mdadm.timer. Sep 13 00:07:28.832877 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:07:28.833486 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:07:28.833512 systemd[1]: Reached target paths.target. Sep 13 00:07:28.834081 systemd[1]: Reached target timers.target. Sep 13 00:07:28.834951 systemd[1]: Listening on dbus.socket. Sep 13 00:07:28.836490 systemd[1]: Starting docker.socket... Sep 13 00:07:28.839555 systemd[1]: Listening on sshd.socket. Sep 13 00:07:28.840342 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:07:28.840794 systemd[1]: Listening on docker.socket. Sep 13 00:07:28.841529 systemd[1]: Reached target sockets.target. Sep 13 00:07:28.842244 systemd[1]: Reached target basic.target. Sep 13 00:07:28.842833 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.842875 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:07:28.843894 systemd[1]: Starting containerd.service... Sep 13 00:07:28.845478 systemd[1]: Starting dbus.service... Sep 13 00:07:28.847029 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:07:28.848709 systemd[1]: Starting extend-filesystems.service... Sep 13 00:07:28.852216 jq[1194]: false Sep 13 00:07:28.849568 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:07:28.850661 systemd[1]: Starting motdgen.service... Sep 13 00:07:28.852307 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:07:28.854138 systemd[1]: Starting sshd-keygen.service... Sep 13 00:07:28.856876 systemd[1]: Starting systemd-logind.service... Sep 13 00:07:28.857453 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:07:28.857543 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:07:28.858880 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:07:28.859724 systemd[1]: Starting update-engine.service... Sep 13 00:07:28.861870 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:07:28.868828 jq[1208]: true Sep 13 00:07:28.864337 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:07:28.864522 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Sep 13 00:07:28.864911 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:07:28.865060 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:07:28.872739 jq[1214]: true Sep 13 00:07:28.878435 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:07:28.878928 systemd[1]: Finished motdgen.service. Sep 13 00:07:28.883502 dbus-daemon[1193]: [system] SELinux support is enabled Sep 13 00:07:28.883719 systemd[1]: Started dbus.service. Sep 13 00:07:28.884112 extend-filesystems[1195]: Found loop1 Sep 13 00:07:28.886169 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:07:28.886194 systemd[1]: Reached target system-config.target. Sep 13 00:07:28.886434 extend-filesystems[1195]: Found vda Sep 13 00:07:28.886929 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:07:28.886944 systemd[1]: Reached target user-config.target. Sep 13 00:07:28.888029 extend-filesystems[1195]: Found vda1 Sep 13 00:07:28.888656 extend-filesystems[1195]: Found vda2 Sep 13 00:07:28.889289 extend-filesystems[1195]: Found vda3 Sep 13 00:07:28.890198 extend-filesystems[1195]: Found usr Sep 13 00:07:28.890198 extend-filesystems[1195]: Found vda4 Sep 13 00:07:28.890198 extend-filesystems[1195]: Found vda6 Sep 13 00:07:28.890198 extend-filesystems[1195]: Found vda7 Sep 13 00:07:28.890198 extend-filesystems[1195]: Found vda9 Sep 13 00:07:28.890198 extend-filesystems[1195]: Checking size of /dev/vda9 Sep 13 00:07:28.912370 extend-filesystems[1195]: Resized partition /dev/vda9 Sep 13 00:07:28.914811 extend-filesystems[1241]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:07:28.917047 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:07:28.917553 systemd-logind[1203]: New seat seat0. Sep 13 00:07:28.922524 bash[1236]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:07:28.927963 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:07:28.929387 update_engine[1207]: I0913 00:07:28.928210 1207 main.cc:92] Flatcar Update Engine starting Sep 13 00:07:28.932477 env[1215]: time="2025-09-13T00:07:28.932395480Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:07:28.935080 update_engine[1207]: I0913 00:07:28.932699 1207 update_check_scheduler.cc:74] Next update check in 5m28s Sep 13 00:07:28.935219 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:07:28.937675 systemd[1]: Started update-engine.service. Sep 13 00:07:28.944995 systemd[1]: Started systemd-logind.service. Sep 13 00:07:28.948235 systemd[1]: Started locksmithd.service. Sep 13 00:07:28.952887 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:07:28.962965 extend-filesystems[1241]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:07:28.962965 extend-filesystems[1241]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:07:28.962965 extend-filesystems[1241]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:07:28.968105 extend-filesystems[1195]: Resized filesystem in /dev/vda9 Sep 13 00:07:28.969047 env[1215]: time="2025-09-13T00:07:28.967991720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Sep 13 00:07:28.969047 env[1215]: time="2025-09-13T00:07:28.968165680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:07:28.965209 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:07:28.965388 systemd[1]: Finished extend-filesystems.service. Sep 13 00:07:28.969398 env[1215]: time="2025-09-13T00:07:28.969333960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:07:28.969398 env[1215]: time="2025-09-13T00:07:28.969362400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:07:28.969627 env[1215]: time="2025-09-13T00:07:28.969588320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:07:28.969627 env[1215]: time="2025-09-13T00:07:28.969617200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:07:28.969682 env[1215]: time="2025-09-13T00:07:28.969631880Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:07:28.969682 env[1215]: time="2025-09-13T00:07:28.969641960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:07:28.969733 env[1215]: time="2025-09-13T00:07:28.969719600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:07:28.970072 env[1215]: time="2025-09-13T00:07:28.970041840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:07:28.970214 env[1215]: time="2025-09-13T00:07:28.970195120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:07:28.970237 env[1215]: time="2025-09-13T00:07:28.970216440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:07:28.970288 env[1215]: time="2025-09-13T00:07:28.970273520Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:07:28.970322 env[1215]: time="2025-09-13T00:07:28.970289840Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:07:28.973585 env[1215]: time="2025-09-13T00:07:28.973545280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:07:28.973659 env[1215]: time="2025-09-13T00:07:28.973589680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:07:28.973659 env[1215]: time="2025-09-13T00:07:28.973605160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 13 00:07:28.973659 env[1215]: time="2025-09-13T00:07:28.973636760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.973659 env[1215]: time="2025-09-13T00:07:28.973651120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.973748 env[1215]: time="2025-09-13T00:07:28.973665160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.973748 env[1215]: time="2025-09-13T00:07:28.973678560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.974109 env[1215]: time="2025-09-13T00:07:28.974086200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.974135 env[1215]: time="2025-09-13T00:07:28.974119440Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.974155 env[1215]: time="2025-09-13T00:07:28.974133640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.974155 env[1215]: time="2025-09-13T00:07:28.974146880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.974199 env[1215]: time="2025-09-13T00:07:28.974159960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:07:28.974299 env[1215]: time="2025-09-13T00:07:28.974282000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:07:28.974383 env[1215]: time="2025-09-13T00:07:28.974368280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:07:28.974705 env[1215]: time="2025-09-13T00:07:28.974677840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:07:28.974796 env[1215]: time="2025-09-13T00:07:28.974723440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.974796 env[1215]: time="2025-09-13T00:07:28.974737760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974877640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974893560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974906240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974919360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974931240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974943520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974954560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974965840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975058 env[1215]: time="2025-09-13T00:07:28.974979800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975116200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975132880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975146520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975158240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975171720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975183720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975201360Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:07:28.975270 env[1215]: time="2025-09-13T00:07:28.975233520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:07:28.975607 env[1215]: time="2025-09-13T00:07:28.975419080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:07:28.975607 env[1215]: time="2025-09-13T00:07:28.975478400Z" level=info msg="Connect containerd service" Sep 13 00:07:28.975607 env[1215]: time="2025-09-13T00:07:28.975531600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:07:28.976652 env[1215]: time="2025-09-13T00:07:28.976160760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:07:28.976652 env[1215]: time="2025-09-13T00:07:28.976401360Z" level=info msg="Start subscribing containerd event" Sep 13 00:07:28.976652 env[1215]: time="2025-09-13T00:07:28.976464800Z" level=info msg="Start recovering state" Sep 13 00:07:28.976652 env[1215]: time="2025-09-13T00:07:28.976484240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:07:28.976652 env[1215]: time="2025-09-13T00:07:28.976522480Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:07:28.976652 env[1215]: time="2025-09-13T00:07:28.976569600Z" level=info msg="containerd successfully booted in 0.047201s" Sep 13 00:07:28.976663 systemd[1]: Started containerd.service. 
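The very long "Start cri plugin with config …" entry above is containerd 1.6.16 echoing its effective CRI configuration. Rewritten as the corresponding config.toml fragment, reconstructed from the logged values rather than copied from the host's actual /etc/containerd/config.toml, the relevant settings amount to:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"

      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error is expected at this point: /etc/cni/net.d is still empty, and the CNI configuration is dropped in later by the network plugin (Cilium, whose pod is prepared further down).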
Sep 13 00:07:28.977864 env[1215]: time="2025-09-13T00:07:28.977809920Z" level=info msg="Start event monitor" Sep 13 00:07:28.977899 env[1215]: time="2025-09-13T00:07:28.977872280Z" level=info msg="Start snapshots syncer" Sep 13 00:07:28.977899 env[1215]: time="2025-09-13T00:07:28.977885440Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:07:28.977899 env[1215]: time="2025-09-13T00:07:28.977896080Z" level=info msg="Start streaming server" Sep 13 00:07:28.994414 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:07:29.454022 systemd-networkd[1043]: eth0: Gained IPv6LL Sep 13 00:07:29.455767 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:07:29.457123 systemd[1]: Reached target network-online.target. Sep 13 00:07:29.459679 systemd[1]: Starting kubelet.service... Sep 13 00:07:30.056007 systemd[1]: Started kubelet.service. Sep 13 00:07:30.470095 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:07:30.489903 systemd[1]: Finished sshd-keygen.service. Sep 13 00:07:30.492449 systemd[1]: Starting issuegen.service... Sep 13 00:07:30.495602 kubelet[1258]: E0913 00:07:30.495544 1258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:30.497485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:30.497598 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:30.498582 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:07:30.498738 systemd[1]: Finished issuegen.service. Sep 13 00:07:30.501014 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:07:30.507972 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:07:30.510247 systemd[1]: Started getty@tty1.service. Sep 13 00:07:30.512243 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 13 00:07:30.513128 systemd[1]: Reached target getty.target. Sep 13 00:07:30.513838 systemd[1]: Reached target multi-user.target. Sep 13 00:07:30.516033 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:07:30.524421 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:07:30.524607 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:07:30.526030 systemd[1]: Startup finished in 577ms (kernel) + 4.018s (initrd) + 4.954s (userspace) = 9.551s. Sep 13 00:07:33.759514 systemd[1]: Created slice system-sshd.slice. Sep 13 00:07:33.760712 systemd[1]: Started sshd@0-10.0.0.29:22-10.0.0.1:49728.service. Sep 13 00:07:33.812996 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 49728 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:33.820686 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:33.839198 systemd[1]: Created slice user-500.slice. Sep 13 00:07:33.840335 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:07:33.842158 systemd-logind[1203]: New session 1 of user core. Sep 13 00:07:33.852957 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:07:33.855703 systemd[1]: Starting user@500.service... 
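The first kubelet start above fails because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during kubeadm init/join, after which the unit is started again (as happens further down, around 00:07:35). For orientation only, a minimal hand-written KubeletConfiguration consistent with what this node logs later (systemd cgroup driver, containerd socket, client CA at /etc/kubernetes/pki/ca.crt, static pods in /etc/kubernetes/manifests) would look roughly like the sketch below; it is not the actual file generated here:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                 # matches SystemdCgroup = true on the containerd side
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt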
Sep 13 00:07:33.861526 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:33.933042 systemd[1283]: Queued start job for default target default.target. Sep 13 00:07:33.933906 systemd[1283]: Reached target paths.target. Sep 13 00:07:33.933945 systemd[1283]: Reached target sockets.target. Sep 13 00:07:33.933957 systemd[1283]: Reached target timers.target. Sep 13 00:07:33.933967 systemd[1283]: Reached target basic.target. Sep 13 00:07:33.934162 systemd[1283]: Reached target default.target. Sep 13 00:07:33.934193 systemd[1]: Started user@500.service. Sep 13 00:07:33.934209 systemd[1283]: Startup finished in 65ms. Sep 13 00:07:33.935945 systemd[1]: Started session-1.scope. Sep 13 00:07:33.990868 systemd[1]: Started sshd@1-10.0.0.29:22-10.0.0.1:49740.service. Sep 13 00:07:34.037930 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 49740 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:34.039253 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:34.045515 systemd-logind[1203]: New session 2 of user core. Sep 13 00:07:34.046761 systemd[1]: Started session-2.scope. Sep 13 00:07:34.104046 sshd[1292]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.107016 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:07:34.107174 systemd[1]: sshd@1-10.0.0.29:22-10.0.0.1:49740.service: Deactivated successfully. Sep 13 00:07:34.107888 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:07:34.109343 systemd[1]: Started sshd@2-10.0.0.29:22-10.0.0.1:49752.service. Sep 13 00:07:34.109845 systemd-logind[1203]: Removed session 2. Sep 13 00:07:34.148171 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 49752 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:34.149627 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:34.153393 systemd-logind[1203]: New session 3 of user core. Sep 13 00:07:34.154244 systemd[1]: Started session-3.scope. Sep 13 00:07:34.208431 sshd[1298]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.211296 systemd[1]: sshd@2-10.0.0.29:22-10.0.0.1:49752.service: Deactivated successfully. Sep 13 00:07:34.212013 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:07:34.212505 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:07:34.213643 systemd[1]: Started sshd@3-10.0.0.29:22-10.0.0.1:49768.service. Sep 13 00:07:34.214291 systemd-logind[1203]: Removed session 3. Sep 13 00:07:34.253402 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 49768 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:34.254792 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:34.258357 systemd-logind[1203]: New session 4 of user core. Sep 13 00:07:34.259196 systemd[1]: Started session-4.scope. Sep 13 00:07:34.313608 sshd[1305]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.316659 systemd[1]: sshd@3-10.0.0.29:22-10.0.0.1:49768.service: Deactivated successfully. Sep 13 00:07:34.317341 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:07:34.317865 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:07:34.318968 systemd[1]: Started sshd@4-10.0.0.29:22-10.0.0.1:49784.service. Sep 13 00:07:34.319656 systemd-logind[1203]: Removed session 4. 
Sep 13 00:07:34.361254 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 49784 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:34.362895 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:34.366491 systemd-logind[1203]: New session 5 of user core. Sep 13 00:07:34.367335 systemd[1]: Started session-5.scope. Sep 13 00:07:34.431423 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:07:34.431644 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:07:34.443626 systemd[1]: Starting coreos-metadata.service... Sep 13 00:07:34.450252 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:07:34.450426 systemd[1]: Finished coreos-metadata.service. Sep 13 00:07:34.957879 systemd[1]: Stopped kubelet.service. Sep 13 00:07:34.960689 systemd[1]: Starting kubelet.service... Sep 13 00:07:34.985037 systemd[1]: Reloading. Sep 13 00:07:35.042914 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2025-09-13T00:07:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:07:35.043273 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2025-09-13T00:07:35Z" level=info msg="torcx already run" Sep 13 00:07:35.237442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:07:35.237465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:07:35.253896 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:35.333266 systemd[1]: Started kubelet.service. Sep 13 00:07:35.335185 systemd[1]: Stopping kubelet.service... Sep 13 00:07:35.335588 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:07:35.335801 systemd[1]: Stopped kubelet.service. Sep 13 00:07:35.337708 systemd[1]: Starting kubelet.service... Sep 13 00:07:35.434467 systemd[1]: Started kubelet.service. Sep 13 00:07:35.467494 kubelet[1420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:35.467494 kubelet[1420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:07:35.467494 kubelet[1420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:07:35.467948 kubelet[1420]: I0913 00:07:35.467532 1420 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:07:36.404545 kubelet[1420]: I0913 00:07:36.404495 1420 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:07:36.404545 kubelet[1420]: I0913 00:07:36.404533 1420 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:07:36.404770 kubelet[1420]: I0913 00:07:36.404745 1420 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:07:36.428369 kubelet[1420]: I0913 00:07:36.428318 1420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:07:36.450538 kubelet[1420]: E0913 00:07:36.450482 1420 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:07:36.450538 kubelet[1420]: I0913 00:07:36.450538 1420 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:07:36.453115 kubelet[1420]: I0913 00:07:36.453083 1420 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:07:36.453402 kubelet[1420]: I0913 00:07:36.453371 1420 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:07:36.453547 kubelet[1420]: I0913 00:07:36.453396 1420 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.29","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:07:36.453637 kubelet[1420]: I0913 00:07:36.453608 1420 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:07:36.453637 kubelet[1420]: I0913 00:07:36.453618 1420 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:07:36.453807 
kubelet[1420]: I0913 00:07:36.453792 1420 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:36.456423 kubelet[1420]: I0913 00:07:36.456401 1420 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:07:36.456486 kubelet[1420]: I0913 00:07:36.456429 1420 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:07:36.458832 kubelet[1420]: I0913 00:07:36.458811 1420 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:07:36.459915 kubelet[1420]: I0913 00:07:36.459895 1420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:07:36.459915 kubelet[1420]: E0913 00:07:36.459904 1420 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:36.460043 kubelet[1420]: E0913 00:07:36.460019 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:36.461044 kubelet[1420]: I0913 00:07:36.461018 1420 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:07:36.461765 kubelet[1420]: I0913 00:07:36.461727 1420 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:07:36.461880 kubelet[1420]: W0913 00:07:36.461869 1420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:07:36.464527 kubelet[1420]: I0913 00:07:36.464504 1420 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:07:36.464587 kubelet[1420]: I0913 00:07:36.464554 1420 server.go:1289] "Started kubelet" Sep 13 00:07:36.465747 kubelet[1420]: I0913 00:07:36.465690 1420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:07:36.466099 kubelet[1420]: I0913 00:07:36.466079 1420 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:07:36.466236 kubelet[1420]: I0913 00:07:36.466215 1420 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:07:36.466682 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 13 00:07:36.466880 kubelet[1420]: I0913 00:07:36.466823 1420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:07:36.467479 kubelet[1420]: I0913 00:07:36.467457 1420 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:07:36.469580 kubelet[1420]: I0913 00:07:36.469533 1420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:07:36.472086 kubelet[1420]: E0913 00:07:36.472062 1420 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:07:36.473130 kubelet[1420]: E0913 00:07:36.473098 1420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.29\" not found" Sep 13 00:07:36.473253 kubelet[1420]: I0913 00:07:36.473239 1420 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:07:36.473631 kubelet[1420]: I0913 00:07:36.473612 1420 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:07:36.473775 kubelet[1420]: I0913 00:07:36.473762 1420 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:07:36.474453 kubelet[1420]: I0913 00:07:36.474428 1420 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:07:36.474665 kubelet[1420]: I0913 00:07:36.474642 1420 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:07:36.476894 kubelet[1420]: I0913 00:07:36.476829 1420 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:07:36.481562 kubelet[1420]: E0913 00:07:36.481524 1420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.29\" not found" node="10.0.0.29" Sep 13 00:07:36.491569 kubelet[1420]: I0913 00:07:36.491461 1420 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:07:36.491569 kubelet[1420]: I0913 00:07:36.491478 1420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:07:36.491569 kubelet[1420]: I0913 00:07:36.491504 1420 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:36.573660 kubelet[1420]: E0913 00:07:36.573595 1420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.29\" not found" Sep 13 00:07:36.582076 kubelet[1420]: I0913 00:07:36.582023 1420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:07:36.582076 kubelet[1420]: I0913 00:07:36.582054 1420 policy_none.go:49] "None policy: Start" Sep 13 00:07:36.582172 kubelet[1420]: I0913 00:07:36.582086 1420 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:07:36.582172 kubelet[1420]: I0913 00:07:36.582100 1420 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:07:36.586936 systemd[1]: Created slice kubepods.slice. Sep 13 00:07:36.592166 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:07:36.595291 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 00:07:36.605784 kubelet[1420]: E0913 00:07:36.605733 1420 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:07:36.606122 kubelet[1420]: I0913 00:07:36.605996 1420 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:07:36.606122 kubelet[1420]: I0913 00:07:36.606015 1420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:07:36.606307 kubelet[1420]: I0913 00:07:36.606254 1420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:07:36.607006 kubelet[1420]: E0913 00:07:36.606945 1420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:07:36.607006 kubelet[1420]: E0913 00:07:36.606992 1420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.29\" not found" Sep 13 00:07:36.647109 kubelet[1420]: I0913 00:07:36.647072 1420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:07:36.647269 kubelet[1420]: I0913 00:07:36.647255 1420 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:07:36.647363 kubelet[1420]: I0913 00:07:36.647335 1420 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:07:36.647433 kubelet[1420]: I0913 00:07:36.647422 1420 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:07:36.647539 kubelet[1420]: E0913 00:07:36.647522 1420 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:07:36.708000 kubelet[1420]: I0913 00:07:36.707889 1420 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.29" Sep 13 00:07:36.713550 kubelet[1420]: I0913 00:07:36.713515 1420 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.29" Sep 13 00:07:36.824957 kubelet[1420]: I0913 00:07:36.824922 1420 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 13 00:07:36.825273 env[1215]: time="2025-09-13T00:07:36.825219907Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:07:36.825524 kubelet[1420]: I0913 00:07:36.825417 1420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 13 00:07:36.965871 sudo[1314]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:36.967870 sshd[1311]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:36.973563 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:07:36.974180 systemd[1]: sshd@4-10.0.0.29:22-10.0.0.1:49784.service: Deactivated successfully. Sep 13 00:07:36.979720 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:07:36.980341 systemd-logind[1203]: Removed session 5. 
Sep 13 00:07:37.406441 kubelet[1420]: I0913 00:07:37.406384 1420 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 13 00:07:37.406632 kubelet[1420]: I0913 00:07:37.406549 1420 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 13 00:07:37.406632 kubelet[1420]: I0913 00:07:37.406589 1420 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 13 00:07:37.406632 kubelet[1420]: I0913 00:07:37.406621 1420 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 13 00:07:37.460580 kubelet[1420]: I0913 00:07:37.460530 1420 apiserver.go:52] "Watching apiserver" Sep 13 00:07:37.460783 kubelet[1420]: E0913 00:07:37.460761 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:37.474763 systemd[1]: Created slice kubepods-besteffort-pod34f8acd9_7f58_4a95_adc6_596cd46f7a9e.slice. Sep 13 00:07:37.475784 kubelet[1420]: I0913 00:07:37.475737 1420 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:07:37.479794 kubelet[1420]: I0913 00:07:37.479763 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-cgroup\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.479858 kubelet[1420]: I0913 00:07:37.479796 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cni-path\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.479858 kubelet[1420]: I0913 00:07:37.479815 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-xtables-lock\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.479858 kubelet[1420]: I0913 00:07:37.479830 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-kernel\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.479944 kubelet[1420]: I0913 00:07:37.479860 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hubble-tls\") pod \"cilium-z8wbh\" (UID: 
\"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.479944 kubelet[1420]: I0913 00:07:37.479887 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4cnv\" (UniqueName: \"kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-kube-api-access-p4cnv\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.479944 kubelet[1420]: I0913 00:07:37.479925 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34f8acd9-7f58-4a95-adc6-596cd46f7a9e-xtables-lock\") pod \"kube-proxy-7fzhv\" (UID: \"34f8acd9-7f58-4a95-adc6-596cd46f7a9e\") " pod="kube-system/kube-proxy-7fzhv" Sep 13 00:07:37.479944 kubelet[1420]: I0913 00:07:37.479939 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-run\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480024 kubelet[1420]: I0913 00:07:37.479958 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hostproc\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480024 kubelet[1420]: I0913 00:07:37.479981 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-etc-cni-netd\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480024 kubelet[1420]: I0913 00:07:37.479995 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-lib-modules\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480024 kubelet[1420]: I0913 00:07:37.480016 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8aeb6b79-8d41-4c0b-9365-90ec4d029386-clustermesh-secrets\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480160 kubelet[1420]: I0913 00:07:37.480054 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-config-path\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480160 kubelet[1420]: I0913 00:07:37.480088 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34f8acd9-7f58-4a95-adc6-596cd46f7a9e-kube-proxy\") pod \"kube-proxy-7fzhv\" (UID: \"34f8acd9-7f58-4a95-adc6-596cd46f7a9e\") " pod="kube-system/kube-proxy-7fzhv" Sep 13 00:07:37.480160 kubelet[1420]: I0913 00:07:37.480104 1420 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34f8acd9-7f58-4a95-adc6-596cd46f7a9e-lib-modules\") pod \"kube-proxy-7fzhv\" (UID: \"34f8acd9-7f58-4a95-adc6-596cd46f7a9e\") " pod="kube-system/kube-proxy-7fzhv" Sep 13 00:07:37.480160 kubelet[1420]: I0913 00:07:37.480128 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-net\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480160 kubelet[1420]: I0913 00:07:37.480152 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-bpf-maps\") pod \"cilium-z8wbh\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " pod="kube-system/cilium-z8wbh" Sep 13 00:07:37.480269 kubelet[1420]: I0913 00:07:37.480188 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfsx\" (UniqueName: \"kubernetes.io/projected/34f8acd9-7f58-4a95-adc6-596cd46f7a9e-kube-api-access-qxfsx\") pod \"kube-proxy-7fzhv\" (UID: \"34f8acd9-7f58-4a95-adc6-596cd46f7a9e\") " pod="kube-system/kube-proxy-7fzhv" Sep 13 00:07:37.494829 systemd[1]: Created slice kubepods-burstable-pod8aeb6b79_8d41_4c0b_9365_90ec4d029386.slice. Sep 13 00:07:37.581438 kubelet[1420]: I0913 00:07:37.581403 1420 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:07:37.792741 kubelet[1420]: E0913 00:07:37.792699 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:37.793790 env[1215]: time="2025-09-13T00:07:37.793452463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fzhv,Uid:34f8acd9-7f58-4a95-adc6-596cd46f7a9e,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:37.806582 kubelet[1420]: E0913 00:07:37.806545 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:37.807042 env[1215]: time="2025-09-13T00:07:37.807007251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z8wbh,Uid:8aeb6b79-8d41-4c0b-9365-90ec4d029386,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:38.408992 env[1215]: time="2025-09-13T00:07:38.408942925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.410727 env[1215]: time="2025-09-13T00:07:38.410690586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.412283 env[1215]: time="2025-09-13T00:07:38.412205124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.413719 env[1215]: time="2025-09-13T00:07:38.413691534Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.414480 env[1215]: time="2025-09-13T00:07:38.414440738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.416453 env[1215]: time="2025-09-13T00:07:38.416425695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.417265 env[1215]: time="2025-09-13T00:07:38.417236851Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.419595 env[1215]: time="2025-09-13T00:07:38.419565875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:38.437395 env[1215]: time="2025-09-13T00:07:38.437191906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:38.437535 env[1215]: time="2025-09-13T00:07:38.437484132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:38.437535 env[1215]: time="2025-09-13T00:07:38.437517155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:38.437596 env[1215]: time="2025-09-13T00:07:38.437528229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:38.437723 env[1215]: time="2025-09-13T00:07:38.437697434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:38.437779 env[1215]: time="2025-09-13T00:07:38.437735954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/963a237dc293ac74c24a8e1ca4f6c7f8fee3a44bb82d4ef6edeb81331cfc7f03 pid=1491 runtime=io.containerd.runc.v2 Sep 13 00:07:38.437876 env[1215]: time="2025-09-13T00:07:38.437830366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:38.438154 env[1215]: time="2025-09-13T00:07:38.438099080Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a pid=1490 runtime=io.containerd.runc.v2 Sep 13 00:07:38.452885 systemd[1]: Started cri-containerd-0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a.scope. Sep 13 00:07:38.455226 systemd[1]: Started cri-containerd-963a237dc293ac74c24a8e1ca4f6c7f8fee3a44bb82d4ef6edeb81331cfc7f03.scope. 
Sep 13 00:07:38.461915 kubelet[1420]: E0913 00:07:38.461875 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:38.489884 env[1215]: time="2025-09-13T00:07:38.489817216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z8wbh,Uid:8aeb6b79-8d41-4c0b-9365-90ec4d029386,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\"" Sep 13 00:07:38.490003 env[1215]: time="2025-09-13T00:07:38.489963911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fzhv,Uid:34f8acd9-7f58-4a95-adc6-596cd46f7a9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"963a237dc293ac74c24a8e1ca4f6c7f8fee3a44bb82d4ef6edeb81331cfc7f03\"" Sep 13 00:07:38.491251 kubelet[1420]: E0913 00:07:38.491051 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:38.491251 kubelet[1420]: E0913 00:07:38.491130 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:38.492409 env[1215]: time="2025-09-13T00:07:38.492376594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 00:07:38.587554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181519410.mount: Deactivated successfully. Sep 13 00:07:39.462787 kubelet[1420]: E0913 00:07:39.462722 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:39.543235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167599377.mount: Deactivated successfully. 
Sep 13 00:07:40.028871 env[1215]: time="2025-09-13T00:07:40.028766561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:40.030166 env[1215]: time="2025-09-13T00:07:40.030125831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:40.032733 env[1215]: time="2025-09-13T00:07:40.032683229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:40.037155 env[1215]: time="2025-09-13T00:07:40.037103013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:40.037515 env[1215]: time="2025-09-13T00:07:40.037476941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 13 00:07:40.038969 env[1215]: time="2025-09-13T00:07:40.038607548Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:07:40.042836 env[1215]: time="2025-09-13T00:07:40.042793696Z" level=info msg="CreateContainer within sandbox \"963a237dc293ac74c24a8e1ca4f6c7f8fee3a44bb82d4ef6edeb81331cfc7f03\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:07:40.058309 env[1215]: time="2025-09-13T00:07:40.058237719Z" level=info msg="CreateContainer within sandbox \"963a237dc293ac74c24a8e1ca4f6c7f8fee3a44bb82d4ef6edeb81331cfc7f03\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49771563c3f013ce9b5e13ee0e365b4e29f299af5e009c99858f91811b74a3e4\"" Sep 13 00:07:40.059336 env[1215]: time="2025-09-13T00:07:40.059302409Z" level=info msg="StartContainer for \"49771563c3f013ce9b5e13ee0e365b4e29f299af5e009c99858f91811b74a3e4\"" Sep 13 00:07:40.076273 systemd[1]: Started cri-containerd-49771563c3f013ce9b5e13ee0e365b4e29f299af5e009c99858f91811b74a3e4.scope. 
Sep 13 00:07:40.113358 env[1215]: time="2025-09-13T00:07:40.113310679Z" level=info msg="StartContainer for \"49771563c3f013ce9b5e13ee0e365b4e29f299af5e009c99858f91811b74a3e4\" returns successfully" Sep 13 00:07:40.463814 kubelet[1420]: E0913 00:07:40.463779 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:40.657498 kubelet[1420]: E0913 00:07:40.657469 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:40.698335 kubelet[1420]: I0913 00:07:40.698267 1420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7fzhv" podStartSLOduration=3.15176579 podStartE2EDuration="4.698251346s" podCreationTimestamp="2025-09-13 00:07:36 +0000 UTC" firstStartedPulling="2025-09-13 00:07:38.491994048 +0000 UTC m=+3.053589375" lastFinishedPulling="2025-09-13 00:07:40.038479604 +0000 UTC m=+4.600074931" observedRunningTime="2025-09-13 00:07:40.698114421 +0000 UTC m=+5.259709748" watchObservedRunningTime="2025-09-13 00:07:40.698251346 +0000 UTC m=+5.259846673" Sep 13 00:07:41.464397 kubelet[1420]: E0913 00:07:41.464322 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:41.659301 kubelet[1420]: E0913 00:07:41.659207 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:42.465447 kubelet[1420]: E0913 00:07:42.465379 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:43.466281 kubelet[1420]: E0913 00:07:43.466074 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:44.466581 kubelet[1420]: E0913 00:07:44.466536 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:45.467538 kubelet[1420]: E0913 00:07:45.467476 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:45.795135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158817352.mount: Deactivated successfully. 
Sep 13 00:07:46.468396 kubelet[1420]: E0913 00:07:46.468357 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:47.469419 kubelet[1420]: E0913 00:07:47.469387 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:48.037090 env[1215]: time="2025-09-13T00:07:48.037029987Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:48.041422 env[1215]: time="2025-09-13T00:07:48.041364852Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:48.043619 env[1215]: time="2025-09-13T00:07:48.043584147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:48.044175 env[1215]: time="2025-09-13T00:07:48.044145166Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 00:07:48.051864 env[1215]: time="2025-09-13T00:07:48.051801548Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:07:48.067301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123376935.mount: Deactivated successfully. Sep 13 00:07:48.074124 env[1215]: time="2025-09-13T00:07:48.074068960Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\"" Sep 13 00:07:48.074740 env[1215]: time="2025-09-13T00:07:48.074705200Z" level=info msg="StartContainer for \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\"" Sep 13 00:07:48.091934 systemd[1]: Started cri-containerd-e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd.scope. Sep 13 00:07:48.128102 env[1215]: time="2025-09-13T00:07:48.128042623Z" level=info msg="StartContainer for \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\" returns successfully" Sep 13 00:07:48.132422 systemd[1]: cri-containerd-e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd.scope: Deactivated successfully. 
Sep 13 00:07:48.363373 env[1215]: time="2025-09-13T00:07:48.363257996Z" level=info msg="shim disconnected" id=e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd Sep 13 00:07:48.363373 env[1215]: time="2025-09-13T00:07:48.363307676Z" level=warning msg="cleaning up after shim disconnected" id=e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd namespace=k8s.io Sep 13 00:07:48.363373 env[1215]: time="2025-09-13T00:07:48.363318405Z" level=info msg="cleaning up dead shim" Sep 13 00:07:48.370897 env[1215]: time="2025-09-13T00:07:48.370838195Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1781 runtime=io.containerd.runc.v2\n" Sep 13 00:07:48.470722 kubelet[1420]: E0913 00:07:48.470680 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:48.670416 kubelet[1420]: E0913 00:07:48.670273 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:48.676293 env[1215]: time="2025-09-13T00:07:48.676248978Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:07:48.690970 env[1215]: time="2025-09-13T00:07:48.690921699Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\"" Sep 13 00:07:48.691617 env[1215]: time="2025-09-13T00:07:48.691594489Z" level=info msg="StartContainer for \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\"" Sep 13 00:07:48.708536 systemd[1]: Started cri-containerd-0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0.scope. Sep 13 00:07:48.740722 env[1215]: time="2025-09-13T00:07:48.740670386Z" level=info msg="StartContainer for \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\" returns successfully" Sep 13 00:07:48.751609 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:07:48.751810 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:07:48.751988 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:07:48.753650 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:07:48.755593 systemd[1]: cri-containerd-0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0.scope: Deactivated successfully. Sep 13 00:07:48.763069 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:07:48.777933 env[1215]: time="2025-09-13T00:07:48.777888866Z" level=info msg="shim disconnected" id=0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0 Sep 13 00:07:48.778194 env[1215]: time="2025-09-13T00:07:48.778166973Z" level=warning msg="cleaning up after shim disconnected" id=0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0 namespace=k8s.io Sep 13 00:07:48.778270 env[1215]: time="2025-09-13T00:07:48.778256606Z" level=info msg="cleaning up dead shim" Sep 13 00:07:48.786119 env[1215]: time="2025-09-13T00:07:48.786079885Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1844 runtime=io.containerd.runc.v2\n" Sep 13 00:07:49.062941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd-rootfs.mount: Deactivated successfully. Sep 13 00:07:49.471330 kubelet[1420]: E0913 00:07:49.471229 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:49.676261 kubelet[1420]: E0913 00:07:49.676229 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:49.682142 env[1215]: time="2025-09-13T00:07:49.682100595Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:07:49.694636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73656848.mount: Deactivated successfully. Sep 13 00:07:49.702504 env[1215]: time="2025-09-13T00:07:49.702448398Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\"" Sep 13 00:07:49.703286 env[1215]: time="2025-09-13T00:07:49.703165071Z" level=info msg="StartContainer for \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\"" Sep 13 00:07:49.724627 systemd[1]: Started cri-containerd-e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78.scope. Sep 13 00:07:49.761907 systemd[1]: cri-containerd-e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78.scope: Deactivated successfully. 
Sep 13 00:07:49.774532 env[1215]: time="2025-09-13T00:07:49.774450610Z" level=info msg="StartContainer for \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\" returns successfully" Sep 13 00:07:49.832557 env[1215]: time="2025-09-13T00:07:49.832512885Z" level=info msg="shim disconnected" id=e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78 Sep 13 00:07:49.833077 env[1215]: time="2025-09-13T00:07:49.833055874Z" level=warning msg="cleaning up after shim disconnected" id=e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78 namespace=k8s.io Sep 13 00:07:49.833182 env[1215]: time="2025-09-13T00:07:49.833167394Z" level=info msg="cleaning up dead shim" Sep 13 00:07:49.841370 env[1215]: time="2025-09-13T00:07:49.841336160Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1902 runtime=io.containerd.runc.v2\n" Sep 13 00:07:50.062669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78-rootfs.mount: Deactivated successfully. Sep 13 00:07:50.471933 kubelet[1420]: E0913 00:07:50.471794 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:50.680044 kubelet[1420]: E0913 00:07:50.680014 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:50.751905 env[1215]: time="2025-09-13T00:07:50.751787235Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:07:51.028034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1868980357.mount: Deactivated successfully. Sep 13 00:07:51.032895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459176618.mount: Deactivated successfully. Sep 13 00:07:51.174867 env[1215]: time="2025-09-13T00:07:51.174006334Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\"" Sep 13 00:07:51.175105 env[1215]: time="2025-09-13T00:07:51.175059351Z" level=info msg="StartContainer for \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\"" Sep 13 00:07:51.203732 systemd[1]: run-containerd-runc-k8s.io-526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d-runc.b92Pct.mount: Deactivated successfully. Sep 13 00:07:51.209648 systemd[1]: Started cri-containerd-526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d.scope. Sep 13 00:07:51.246522 systemd[1]: cri-containerd-526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d.scope: Deactivated successfully. 
Sep 13 00:07:51.247532 env[1215]: time="2025-09-13T00:07:51.247374464Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8aeb6b79_8d41_4c0b_9365_90ec4d029386.slice/cri-containerd-526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d.scope/memory.events\": no such file or directory" Sep 13 00:07:51.250610 env[1215]: time="2025-09-13T00:07:51.250554447Z" level=info msg="StartContainer for \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\" returns successfully" Sep 13 00:07:51.266222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d-rootfs.mount: Deactivated successfully. Sep 13 00:07:51.272895 env[1215]: time="2025-09-13T00:07:51.272813526Z" level=info msg="shim disconnected" id=526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d Sep 13 00:07:51.273030 env[1215]: time="2025-09-13T00:07:51.272917102Z" level=warning msg="cleaning up after shim disconnected" id=526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d namespace=k8s.io Sep 13 00:07:51.273030 env[1215]: time="2025-09-13T00:07:51.272928188Z" level=info msg="cleaning up dead shim" Sep 13 00:07:51.279776 env[1215]: time="2025-09-13T00:07:51.279647911Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1960 runtime=io.containerd.runc.v2\n" Sep 13 00:07:51.472980 kubelet[1420]: E0913 00:07:51.472920 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:51.683140 kubelet[1420]: E0913 00:07:51.682952 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:51.687468 env[1215]: time="2025-09-13T00:07:51.687426554Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:07:51.708007 env[1215]: time="2025-09-13T00:07:51.707960808Z" level=info msg="CreateContainer within sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\"" Sep 13 00:07:51.708923 env[1215]: time="2025-09-13T00:07:51.708890358Z" level=info msg="StartContainer for \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\"" Sep 13 00:07:51.723066 systemd[1]: Started cri-containerd-34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5.scope. Sep 13 00:07:51.760939 env[1215]: time="2025-09-13T00:07:51.760888735Z" level=info msg="StartContainer for \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\" returns successfully" Sep 13 00:07:51.902883 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 00:07:51.920168 kubelet[1420]: I0913 00:07:51.920133 1420 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:07:52.133875 kernel: Initializing XFRM netlink socket Sep 13 00:07:52.136869 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 13 00:07:52.473865 kubelet[1420]: E0913 00:07:52.473690 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:52.687897 kubelet[1420]: E0913 00:07:52.687829 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:53.034503 kubelet[1420]: I0913 00:07:53.034443 1420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z8wbh" podStartSLOduration=7.481581604 podStartE2EDuration="17.034423092s" podCreationTimestamp="2025-09-13 00:07:36 +0000 UTC" firstStartedPulling="2025-09-13 00:07:38.49214949 +0000 UTC m=+3.053744817" lastFinishedPulling="2025-09-13 00:07:48.044990978 +0000 UTC m=+12.606586305" observedRunningTime="2025-09-13 00:07:52.717202352 +0000 UTC m=+17.278797679" watchObservedRunningTime="2025-09-13 00:07:53.034423092 +0000 UTC m=+17.596018419" Sep 13 00:07:53.042330 systemd[1]: Created slice kubepods-besteffort-poda06c94a1_c1d9_4f01_a317_58aabe077082.slice. Sep 13 00:07:53.082439 kubelet[1420]: I0913 00:07:53.082401 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4kk\" (UniqueName: \"kubernetes.io/projected/a06c94a1-c1d9-4f01-a317-58aabe077082-kube-api-access-wb4kk\") pod \"nginx-deployment-7fcdb87857-n4766\" (UID: \"a06c94a1-c1d9-4f01-a317-58aabe077082\") " pod="default/nginx-deployment-7fcdb87857-n4766" Sep 13 00:07:53.345133 env[1215]: time="2025-09-13T00:07:53.345094547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-n4766,Uid:a06c94a1-c1d9-4f01-a317-58aabe077082,Namespace:default,Attempt:0,}" Sep 13 00:07:53.474337 kubelet[1420]: E0913 00:07:53.474287 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:53.689748 kubelet[1420]: E0913 00:07:53.689434 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:53.761696 systemd-networkd[1043]: cilium_host: Link UP Sep 13 00:07:53.763371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:07:53.763448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:07:53.762459 systemd-networkd[1043]: cilium_net: Link UP Sep 13 00:07:53.762815 systemd-networkd[1043]: cilium_net: Gained carrier Sep 13 00:07:53.763718 systemd-networkd[1043]: cilium_host: Gained carrier Sep 13 00:07:53.864760 systemd-networkd[1043]: cilium_vxlan: Link UP Sep 13 00:07:53.864766 systemd-networkd[1043]: cilium_vxlan: Gained carrier Sep 13 00:07:54.135182 systemd-networkd[1043]: cilium_host: Gained IPv6LL Sep 13 00:07:54.141867 kernel: NET: Registered PF_ALG protocol family Sep 13 00:07:54.475330 kubelet[1420]: E0913 00:07:54.475225 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:54.542295 systemd-networkd[1043]: cilium_net: Gained IPv6LL Sep 13 00:07:54.690827 kubelet[1420]: E0913 00:07:54.690779 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:54.774286 systemd-networkd[1043]: lxc_health: Link UP Sep 13 
00:07:54.785357 systemd-networkd[1043]: lxc_health: Gained carrier Sep 13 00:07:54.785887 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:07:54.885301 systemd-networkd[1043]: lxcc1a6f644f444: Link UP Sep 13 00:07:54.894871 kernel: eth0: renamed from tmp687d6 Sep 13 00:07:54.902878 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc1a6f644f444: link becomes ready Sep 13 00:07:54.902943 systemd-networkd[1043]: lxcc1a6f644f444: Gained carrier Sep 13 00:07:55.182122 systemd-networkd[1043]: cilium_vxlan: Gained IPv6LL Sep 13 00:07:55.476705 kubelet[1420]: E0913 00:07:55.476417 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:55.692567 kubelet[1420]: E0913 00:07:55.692521 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:56.270096 systemd-networkd[1043]: lxc_health: Gained IPv6LL Sep 13 00:07:56.459959 kubelet[1420]: E0913 00:07:56.459911 1420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:56.477328 kubelet[1420]: E0913 00:07:56.477275 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:56.655052 systemd-networkd[1043]: lxcc1a6f644f444: Gained IPv6LL Sep 13 00:07:56.694623 kubelet[1420]: E0913 00:07:56.694560 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:57.477498 kubelet[1420]: E0913 00:07:57.477428 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:57.695734 kubelet[1420]: E0913 00:07:57.695679 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:58.479828 kubelet[1420]: E0913 00:07:58.479779 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:07:58.539442 env[1215]: time="2025-09-13T00:07:58.539360591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:58.539442 env[1215]: time="2025-09-13T00:07:58.539402640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:58.539442 env[1215]: time="2025-09-13T00:07:58.539413283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:58.539956 env[1215]: time="2025-09-13T00:07:58.539921912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/687d6ee9976ee479a7e56915920e0f182e0485b13d76dc003f8650809ad1c6f0 pid=2511 runtime=io.containerd.runc.v2 Sep 13 00:07:58.551838 systemd[1]: Started cri-containerd-687d6ee9976ee479a7e56915920e0f182e0485b13d76dc003f8650809ad1c6f0.scope. 
Sep 13 00:07:58.569995 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:07:58.585672 env[1215]: time="2025-09-13T00:07:58.585632473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-n4766,Uid:a06c94a1-c1d9-4f01-a317-58aabe077082,Namespace:default,Attempt:0,} returns sandbox id \"687d6ee9976ee479a7e56915920e0f182e0485b13d76dc003f8650809ad1c6f0\"" Sep 13 00:07:58.586976 env[1215]: time="2025-09-13T00:07:58.586945156Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 13 00:07:59.480473 kubelet[1420]: E0913 00:07:59.480383 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:00.480767 kubelet[1420]: E0913 00:08:00.480722 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:00.588415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552724799.mount: Deactivated successfully. Sep 13 00:08:01.481500 kubelet[1420]: E0913 00:08:01.481451 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:01.838228 env[1215]: time="2025-09-13T00:08:01.838175429Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:01.839930 env[1215]: time="2025-09-13T00:08:01.839901984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:01.842597 env[1215]: time="2025-09-13T00:08:01.842561167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:01.844685 env[1215]: time="2025-09-13T00:08:01.844653140Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:01.845436 env[1215]: time="2025-09-13T00:08:01.845403899Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 13 00:08:01.849798 env[1215]: time="2025-09-13T00:08:01.849761712Z" level=info msg="CreateContainer within sandbox \"687d6ee9976ee479a7e56915920e0f182e0485b13d76dc003f8650809ad1c6f0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 13 00:08:01.859167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127818081.mount: Deactivated successfully. Sep 13 00:08:01.862514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011558894.mount: Deactivated successfully. 
Sep 13 00:08:01.864331 env[1215]: time="2025-09-13T00:08:01.864292904Z" level=info msg="CreateContainer within sandbox \"687d6ee9976ee479a7e56915920e0f182e0485b13d76dc003f8650809ad1c6f0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"aff91a025df43369b73fc16b5892a9fb0a12ec9621359627c6dceef55eae1208\"" Sep 13 00:08:01.864748 env[1215]: time="2025-09-13T00:08:01.864697128Z" level=info msg="StartContainer for \"aff91a025df43369b73fc16b5892a9fb0a12ec9621359627c6dceef55eae1208\"" Sep 13 00:08:01.878877 systemd[1]: Started cri-containerd-aff91a025df43369b73fc16b5892a9fb0a12ec9621359627c6dceef55eae1208.scope. Sep 13 00:08:01.910263 env[1215]: time="2025-09-13T00:08:01.910222051Z" level=info msg="StartContainer for \"aff91a025df43369b73fc16b5892a9fb0a12ec9621359627c6dceef55eae1208\" returns successfully" Sep 13 00:08:02.482079 kubelet[1420]: E0913 00:08:02.482034 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:02.719113 kubelet[1420]: I0913 00:08:02.719003 1420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-n4766" podStartSLOduration=6.4587733929999995 podStartE2EDuration="9.718988278s" podCreationTimestamp="2025-09-13 00:07:53 +0000 UTC" firstStartedPulling="2025-09-13 00:07:58.586563113 +0000 UTC m=+23.148158400" lastFinishedPulling="2025-09-13 00:08:01.846777958 +0000 UTC m=+26.408373285" observedRunningTime="2025-09-13 00:08:02.718693559 +0000 UTC m=+27.280288886" watchObservedRunningTime="2025-09-13 00:08:02.718988278 +0000 UTC m=+27.280583605" Sep 13 00:08:03.482968 kubelet[1420]: E0913 00:08:03.482910 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:04.470895 systemd[1]: Created slice kubepods-besteffort-pod0fe81b47_ce3b_4343_aa6f_81c9d143fadb.slice. 
Sep 13 00:08:04.483570 kubelet[1420]: E0913 00:08:04.483505 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:04.644272 kubelet[1420]: I0913 00:08:04.644213 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfk5z\" (UniqueName: \"kubernetes.io/projected/0fe81b47-ce3b-4343-aa6f-81c9d143fadb-kube-api-access-qfk5z\") pod \"nfs-server-provisioner-0\" (UID: \"0fe81b47-ce3b-4343-aa6f-81c9d143fadb\") " pod="default/nfs-server-provisioner-0" Sep 13 00:08:04.644272 kubelet[1420]: I0913 00:08:04.644268 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0fe81b47-ce3b-4343-aa6f-81c9d143fadb-data\") pod \"nfs-server-provisioner-0\" (UID: \"0fe81b47-ce3b-4343-aa6f-81c9d143fadb\") " pod="default/nfs-server-provisioner-0" Sep 13 00:08:04.773768 env[1215]: time="2025-09-13T00:08:04.773392057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0fe81b47-ce3b-4343-aa6f-81c9d143fadb,Namespace:default,Attempt:0,}" Sep 13 00:08:04.805283 systemd-networkd[1043]: lxccedb4d49ff2e: Link UP Sep 13 00:08:04.814870 kernel: eth0: renamed from tmp20cfe Sep 13 00:08:04.821905 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:08:04.822008 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccedb4d49ff2e: link becomes ready Sep 13 00:08:04.822980 systemd-networkd[1043]: lxccedb4d49ff2e: Gained carrier Sep 13 00:08:04.961427 env[1215]: time="2025-09-13T00:08:04.961351484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:04.961427 env[1215]: time="2025-09-13T00:08:04.961395529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:04.961608 env[1215]: time="2025-09-13T00:08:04.961405890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:04.962550 env[1215]: time="2025-09-13T00:08:04.961764893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20cfe1cb75f8840130f87624ab91e605ab4063f544b2079bdf9cdba38d0a77e0 pid=2641 runtime=io.containerd.runc.v2 Sep 13 00:08:04.976879 systemd[1]: Started cri-containerd-20cfe1cb75f8840130f87624ab91e605ab4063f544b2079bdf9cdba38d0a77e0.scope. 
Sep 13 00:08:04.996255 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:05.011447 env[1215]: time="2025-09-13T00:08:05.011400587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0fe81b47-ce3b-4343-aa6f-81c9d143fadb,Namespace:default,Attempt:0,} returns sandbox id \"20cfe1cb75f8840130f87624ab91e605ab4063f544b2079bdf9cdba38d0a77e0\"" Sep 13 00:08:05.013207 env[1215]: time="2025-09-13T00:08:05.013177827Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 13 00:08:05.484560 kubelet[1420]: E0913 00:08:05.484481 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:05.756936 systemd[1]: run-containerd-runc-k8s.io-20cfe1cb75f8840130f87624ab91e605ab4063f544b2079bdf9cdba38d0a77e0-runc.gMHPDa.mount: Deactivated successfully. Sep 13 00:08:06.063010 systemd-networkd[1043]: lxccedb4d49ff2e: Gained IPv6LL Sep 13 00:08:06.485116 kubelet[1420]: E0913 00:08:06.484808 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:07.149631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629082028.mount: Deactivated successfully. Sep 13 00:08:07.485871 kubelet[1420]: E0913 00:08:07.485475 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:08.485645 kubelet[1420]: E0913 00:08:08.485592 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:08.857281 env[1215]: time="2025-09-13T00:08:08.857235660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:08.860779 env[1215]: time="2025-09-13T00:08:08.860727593Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:08.862276 env[1215]: time="2025-09-13T00:08:08.862250659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:08.863781 env[1215]: time="2025-09-13T00:08:08.863758763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:08.864570 env[1215]: time="2025-09-13T00:08:08.864546438Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 13 00:08:08.868514 env[1215]: time="2025-09-13T00:08:08.868485335Z" level=info msg="CreateContainer within sandbox \"20cfe1cb75f8840130f87624ab91e605ab4063f544b2079bdf9cdba38d0a77e0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 13 00:08:08.879316 env[1215]: time="2025-09-13T00:08:08.879267365Z" level=info msg="CreateContainer within sandbox \"20cfe1cb75f8840130f87624ab91e605ab4063f544b2079bdf9cdba38d0a77e0\" for 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ccfe7e4a4833c4ba9ff4ff2e9a7565b70ad79170cc7fd6fba5bc8f653980b2c7\"" Sep 13 00:08:08.879721 env[1215]: time="2025-09-13T00:08:08.879695926Z" level=info msg="StartContainer for \"ccfe7e4a4833c4ba9ff4ff2e9a7565b70ad79170cc7fd6fba5bc8f653980b2c7\"" Sep 13 00:08:08.899724 systemd[1]: Started cri-containerd-ccfe7e4a4833c4ba9ff4ff2e9a7565b70ad79170cc7fd6fba5bc8f653980b2c7.scope. Sep 13 00:08:08.922071 env[1215]: time="2025-09-13T00:08:08.922024010Z" level=info msg="StartContainer for \"ccfe7e4a4833c4ba9ff4ff2e9a7565b70ad79170cc7fd6fba5bc8f653980b2c7\" returns successfully" Sep 13 00:08:09.486202 kubelet[1420]: E0913 00:08:09.486152 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:09.743158 kubelet[1420]: I0913 00:08:09.743024 1420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.889817079 podStartE2EDuration="5.743008434s" podCreationTimestamp="2025-09-13 00:08:04 +0000 UTC" firstStartedPulling="2025-09-13 00:08:05.012644527 +0000 UTC m=+29.574239854" lastFinishedPulling="2025-09-13 00:08:08.865835882 +0000 UTC m=+33.427431209" observedRunningTime="2025-09-13 00:08:09.74263104 +0000 UTC m=+34.304226367" watchObservedRunningTime="2025-09-13 00:08:09.743008434 +0000 UTC m=+34.304603761" Sep 13 00:08:10.486576 kubelet[1420]: E0913 00:08:10.486515 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:11.486805 kubelet[1420]: E0913 00:08:11.486755 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:12.487538 kubelet[1420]: E0913 00:08:12.487498 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:13.488398 kubelet[1420]: E0913 00:08:13.488351 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:13.978007 update_engine[1207]: I0913 00:08:13.977957 1207 update_attempter.cc:509] Updating boot flags... Sep 13 00:08:14.181976 systemd[1]: Created slice kubepods-besteffort-podfc6e68ca_88f5_4e97_9a4d_e5fc5e40b39e.slice. Sep 13 00:08:14.287913 kubelet[1420]: I0913 00:08:14.287872 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv55b\" (UniqueName: \"kubernetes.io/projected/fc6e68ca-88f5-4e97-9a4d-e5fc5e40b39e-kube-api-access-dv55b\") pod \"test-pod-1\" (UID: \"fc6e68ca-88f5-4e97-9a4d-e5fc5e40b39e\") " pod="default/test-pod-1" Sep 13 00:08:14.287913 kubelet[1420]: I0913 00:08:14.287913 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a7f1d37-d219-4266-a771-28774fcfbb60\" (UniqueName: \"kubernetes.io/nfs/fc6e68ca-88f5-4e97-9a4d-e5fc5e40b39e-pvc-7a7f1d37-d219-4266-a771-28774fcfbb60\") pod \"test-pod-1\" (UID: \"fc6e68ca-88f5-4e97-9a4d-e5fc5e40b39e\") " pod="default/test-pod-1" Sep 13 00:08:14.421178 kernel: FS-Cache: Loaded Sep 13 00:08:14.452249 kernel: RPC: Registered named UNIX socket transport module. Sep 13 00:08:14.452367 kernel: RPC: Registered udp transport module. Sep 13 00:08:14.452388 kernel: RPC: Registered tcp transport module. Sep 13 00:08:14.452407 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Sep 13 00:08:14.488927 kubelet[1420]: E0913 00:08:14.488880 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:14.499994 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 13 00:08:14.634909 kernel: NFS: Registering the id_resolver key type Sep 13 00:08:14.635047 kernel: Key type id_resolver registered Sep 13 00:08:14.635076 kernel: Key type id_legacy registered Sep 13 00:08:14.678531 nfsidmap[2773]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 13 00:08:14.685679 nfsidmap[2776]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 13 00:08:14.785241 env[1215]: time="2025-09-13T00:08:14.785175516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fc6e68ca-88f5-4e97-9a4d-e5fc5e40b39e,Namespace:default,Attempt:0,}" Sep 13 00:08:14.820541 systemd-networkd[1043]: lxcc3750b72b759: Link UP Sep 13 00:08:14.826875 kernel: eth0: renamed from tmp6478b Sep 13 00:08:14.831920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:08:14.831986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc3750b72b759: link becomes ready Sep 13 00:08:14.832574 systemd-networkd[1043]: lxcc3750b72b759: Gained carrier Sep 13 00:08:15.059522 env[1215]: time="2025-09-13T00:08:15.059428576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:15.059522 env[1215]: time="2025-09-13T00:08:15.059484139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:15.059522 env[1215]: time="2025-09-13T00:08:15.059494540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:15.059906 env[1215]: time="2025-09-13T00:08:15.059836523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6478b3a74413723fbca835f6507530fae9c984458af82ead7136a311b1902e29 pid=2808 runtime=io.containerd.runc.v2 Sep 13 00:08:15.075879 systemd[1]: Started cri-containerd-6478b3a74413723fbca835f6507530fae9c984458af82ead7136a311b1902e29.scope. 
Sep 13 00:08:15.097927 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:15.114046 env[1215]: time="2025-09-13T00:08:15.114004870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fc6e68ca-88f5-4e97-9a4d-e5fc5e40b39e,Namespace:default,Attempt:0,} returns sandbox id \"6478b3a74413723fbca835f6507530fae9c984458af82ead7136a311b1902e29\"" Sep 13 00:08:15.116758 env[1215]: time="2025-09-13T00:08:15.116707089Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 13 00:08:15.456633 env[1215]: time="2025-09-13T00:08:15.455966955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:15.460010 env[1215]: time="2025-09-13T00:08:15.459972100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:15.462533 env[1215]: time="2025-09-13T00:08:15.462505108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:15.464582 env[1215]: time="2025-09-13T00:08:15.464537682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:15.465887 env[1215]: time="2025-09-13T00:08:15.465503706Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 13 00:08:15.474476 env[1215]: time="2025-09-13T00:08:15.474428537Z" level=info msg="CreateContainer within sandbox \"6478b3a74413723fbca835f6507530fae9c984458af82ead7136a311b1902e29\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 13 00:08:15.489586 kubelet[1420]: E0913 00:08:15.489537 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:15.503640 env[1215]: time="2025-09-13T00:08:15.503382215Z" level=info msg="CreateContainer within sandbox \"6478b3a74413723fbca835f6507530fae9c984458af82ead7136a311b1902e29\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"83c0e813fa8504453844cff33b25c4e67b982cf39d390741a17e2e8b5d9cb63f\"" Sep 13 00:08:15.503990 env[1215]: time="2025-09-13T00:08:15.503962453Z" level=info msg="StartContainer for \"83c0e813fa8504453844cff33b25c4e67b982cf39d390741a17e2e8b5d9cb63f\"" Sep 13 00:08:15.521713 systemd[1]: Started cri-containerd-83c0e813fa8504453844cff33b25c4e67b982cf39d390741a17e2e8b5d9cb63f.scope. 
Sep 13 00:08:15.563125 env[1215]: time="2025-09-13T00:08:15.563066127Z" level=info msg="StartContainer for \"83c0e813fa8504453844cff33b25c4e67b982cf39d390741a17e2e8b5d9cb63f\" returns successfully" Sep 13 00:08:15.766696 kubelet[1420]: I0913 00:08:15.765408 1420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=11.414181788 podStartE2EDuration="11.765392605s" podCreationTimestamp="2025-09-13 00:08:04 +0000 UTC" firstStartedPulling="2025-09-13 00:08:15.115934558 +0000 UTC m=+39.677529845" lastFinishedPulling="2025-09-13 00:08:15.467145335 +0000 UTC m=+40.028740662" observedRunningTime="2025-09-13 00:08:15.765220314 +0000 UTC m=+40.326815601" watchObservedRunningTime="2025-09-13 00:08:15.765392605 +0000 UTC m=+40.326987892" Sep 13 00:08:16.459193 kubelet[1420]: E0913 00:08:16.458915 1420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:16.490373 kubelet[1420]: E0913 00:08:16.490305 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:16.814054 systemd-networkd[1043]: lxcc3750b72b759: Gained IPv6LL Sep 13 00:08:17.490512 kubelet[1420]: E0913 00:08:17.490466 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:18.491207 kubelet[1420]: E0913 00:08:18.491152 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:19.492162 kubelet[1420]: E0913 00:08:19.492108 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:20.492495 kubelet[1420]: E0913 00:08:20.492441 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:21.493610 kubelet[1420]: E0913 00:08:21.493554 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:22.494539 kubelet[1420]: E0913 00:08:22.494351 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:22.497274 env[1215]: time="2025-09-13T00:08:22.497182223Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:08:22.504911 env[1215]: time="2025-09-13T00:08:22.504750423Z" level=info msg="StopContainer for \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\" with timeout 2 (s)" Sep 13 00:08:22.506170 env[1215]: time="2025-09-13T00:08:22.506129568Z" level=info msg="Stop container \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\" with signal terminated" Sep 13 00:08:22.511750 systemd-networkd[1043]: lxc_health: Link DOWN Sep 13 00:08:22.511755 systemd-networkd[1043]: lxc_health: Lost carrier Sep 13 00:08:22.545428 systemd[1]: cri-containerd-34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5.scope: Deactivated successfully. Sep 13 00:08:22.546099 systemd[1]: cri-containerd-34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5.scope: Consumed 6.298s CPU time. 
Sep 13 00:08:22.569563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5-rootfs.mount: Deactivated successfully. Sep 13 00:08:22.666470 env[1215]: time="2025-09-13T00:08:22.666423630Z" level=info msg="shim disconnected" id=34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5 Sep 13 00:08:22.666470 env[1215]: time="2025-09-13T00:08:22.666467512Z" level=warning msg="cleaning up after shim disconnected" id=34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5 namespace=k8s.io Sep 13 00:08:22.666470 env[1215]: time="2025-09-13T00:08:22.666477953Z" level=info msg="cleaning up dead shim" Sep 13 00:08:22.675402 env[1215]: time="2025-09-13T00:08:22.675355855Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n" Sep 13 00:08:22.678199 env[1215]: time="2025-09-13T00:08:22.678152468Z" level=info msg="StopContainer for \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\" returns successfully" Sep 13 00:08:22.678788 env[1215]: time="2025-09-13T00:08:22.678757857Z" level=info msg="StopPodSandbox for \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\"" Sep 13 00:08:22.678934 env[1215]: time="2025-09-13T00:08:22.678813140Z" level=info msg="Container to stop \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.678934 env[1215]: time="2025-09-13T00:08:22.678828420Z" level=info msg="Container to stop \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.678934 env[1215]: time="2025-09-13T00:08:22.678839101Z" level=info msg="Container to stop \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.678934 env[1215]: time="2025-09-13T00:08:22.678865102Z" level=info msg="Container to stop \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.678934 env[1215]: time="2025-09-13T00:08:22.678877423Z" level=info msg="Container to stop \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.681903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a-shm.mount: Deactivated successfully. Sep 13 00:08:22.685760 systemd[1]: cri-containerd-0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a.scope: Deactivated successfully. Sep 13 00:08:22.710309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:22.715067 env[1215]: time="2025-09-13T00:08:22.715015341Z" level=info msg="shim disconnected" id=0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a Sep 13 00:08:22.715067 env[1215]: time="2025-09-13T00:08:22.715067904Z" level=warning msg="cleaning up after shim disconnected" id=0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a namespace=k8s.io Sep 13 00:08:22.715266 env[1215]: time="2025-09-13T00:08:22.715077784Z" level=info msg="cleaning up dead shim" Sep 13 00:08:22.721703 env[1215]: time="2025-09-13T00:08:22.721652777Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2969 runtime=io.containerd.runc.v2\n" Sep 13 00:08:22.722034 env[1215]: time="2025-09-13T00:08:22.721996353Z" level=info msg="TearDown network for sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" successfully" Sep 13 00:08:22.722034 env[1215]: time="2025-09-13T00:08:22.722026274Z" level=info msg="StopPodSandbox for \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" returns successfully" Sep 13 00:08:22.761768 kubelet[1420]: I0913 00:08:22.761565 1420 scope.go:117] "RemoveContainer" containerID="34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5" Sep 13 00:08:22.764727 env[1215]: time="2025-09-13T00:08:22.764668182Z" level=info msg="RemoveContainer for \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\"" Sep 13 00:08:22.771770 env[1215]: time="2025-09-13T00:08:22.771719877Z" level=info msg="RemoveContainer for \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\" returns successfully" Sep 13 00:08:22.772035 kubelet[1420]: I0913 00:08:22.772007 1420 scope.go:117] "RemoveContainer" containerID="526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d" Sep 13 00:08:22.773394 env[1215]: time="2025-09-13T00:08:22.773362636Z" level=info msg="RemoveContainer for \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\"" Sep 13 00:08:22.781296 env[1215]: time="2025-09-13T00:08:22.781248210Z" level=info msg="RemoveContainer for \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\" returns successfully" Sep 13 00:08:22.781578 kubelet[1420]: I0913 00:08:22.781540 1420 scope.go:117] "RemoveContainer" containerID="e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78" Sep 13 00:08:22.782790 env[1215]: time="2025-09-13T00:08:22.782760602Z" level=info msg="RemoveContainer for \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\"" Sep 13 00:08:22.791429 env[1215]: time="2025-09-13T00:08:22.791380412Z" level=info msg="RemoveContainer for \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\" returns successfully" Sep 13 00:08:22.791655 kubelet[1420]: I0913 00:08:22.791629 1420 scope.go:117] "RemoveContainer" containerID="0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0" Sep 13 00:08:22.792939 env[1215]: time="2025-09-13T00:08:22.792876763Z" level=info msg="RemoveContainer for \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\"" Sep 13 00:08:22.813401 env[1215]: time="2025-09-13T00:08:22.813346977Z" level=info msg="RemoveContainer for \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\" returns successfully" Sep 13 00:08:22.813800 kubelet[1420]: I0913 00:08:22.813674 1420 scope.go:117] "RemoveContainer" containerID="e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd" Sep 13 00:08:22.814907 
env[1215]: time="2025-09-13T00:08:22.814875770Z" level=info msg="RemoveContainer for \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\"" Sep 13 00:08:22.842312 kubelet[1420]: I0913 00:08:22.842271 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-xtables-lock\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842312 kubelet[1420]: I0913 00:08:22.842309 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cni-path\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842468 kubelet[1420]: I0913 00:08:22.842344 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hubble-tls\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842468 kubelet[1420]: I0913 00:08:22.842359 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-run\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842468 kubelet[1420]: I0913 00:08:22.842372 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-bpf-maps\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842468 kubelet[1420]: I0913 00:08:22.842386 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hostproc\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842468 kubelet[1420]: I0913 00:08:22.842410 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-config-path\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842468 kubelet[1420]: I0913 00:08:22.842429 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-lib-modules\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842627 kubelet[1420]: I0913 00:08:22.842443 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-net\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842627 kubelet[1420]: I0913 00:08:22.842461 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-cgroup\") pod 
\"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842627 kubelet[1420]: I0913 00:08:22.842485 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-kernel\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842627 kubelet[1420]: I0913 00:08:22.842523 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4cnv\" (UniqueName: \"kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-kube-api-access-p4cnv\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842627 kubelet[1420]: I0913 00:08:22.842537 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-etc-cni-netd\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842627 kubelet[1420]: I0913 00:08:22.842560 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8aeb6b79-8d41-4c0b-9365-90ec4d029386-clustermesh-secrets\") pod \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\" (UID: \"8aeb6b79-8d41-4c0b-9365-90ec4d029386\") " Sep 13 00:08:22.842755 env[1215]: time="2025-09-13T00:08:22.842461361Z" level=info msg="RemoveContainer for \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\" returns successfully" Sep 13 00:08:22.842927 kubelet[1420]: I0913 00:08:22.842908 1420 scope.go:117] "RemoveContainer" containerID="34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5" Sep 13 00:08:22.843274 env[1215]: time="2025-09-13T00:08:22.843191676Z" level=error msg="ContainerStatus for \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\": not found" Sep 13 00:08:22.844934 kubelet[1420]: I0913 00:08:22.844903 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:08:22.845085 kubelet[1420]: I0913 00:08:22.845047 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.845142 kubelet[1420]: I0913 00:08:22.845107 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cni-path" (OuterVolumeSpecName: "cni-path") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.845244 kubelet[1420]: E0913 00:08:22.845219 1420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\": not found" containerID="34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5" Sep 13 00:08:22.845353 kubelet[1420]: I0913 00:08:22.845310 1420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5"} err="failed to get container status \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"34c24973e1c01fae2e9e1a3e665a4016d0f42687b1fe2c17d0654528da77f1c5\": not found" Sep 13 00:08:22.845446 kubelet[1420]: I0913 00:08:22.845408 1420 scope.go:117] "RemoveContainer" containerID="526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d" Sep 13 00:08:22.845607 kubelet[1420]: I0913 00:08:22.845576 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.845702 kubelet[1420]: I0913 00:08:22.845682 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.845745 env[1215]: time="2025-09-13T00:08:22.845680914Z" level=error msg="ContainerStatus for \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\": not found" Sep 13 00:08:22.845859 kubelet[1420]: E0913 00:08:22.845827 1420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\": not found" containerID="526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d" Sep 13 00:08:22.845947 kubelet[1420]: I0913 00:08:22.845928 1420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d"} err="failed to get container status \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"526fa6035f3e1b5e2b3dffdedc0e596ef69938d8f18fc4419d8c5a6ab97cec7d\": not found" Sep 13 00:08:22.846007 kubelet[1420]: I0913 00:08:22.845996 1420 scope.go:117] "RemoveContainer" containerID="e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78" Sep 13 00:08:22.847072 kubelet[1420]: I0913 00:08:22.846195 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.847072 kubelet[1420]: I0913 00:08:22.846222 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.847072 kubelet[1420]: I0913 00:08:22.846240 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.847072 kubelet[1420]: I0913 00:08:22.846256 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hostproc" (OuterVolumeSpecName: "hostproc") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.847072 kubelet[1420]: I0913 00:08:22.846633 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.847267 env[1215]: time="2025-09-13T00:08:22.846805608Z" level=error msg="ContainerStatus for \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\": not found" Sep 13 00:08:22.847305 kubelet[1420]: I0913 00:08:22.846682 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.847305 kubelet[1420]: E0913 00:08:22.846967 1420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\": not found" containerID="e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78" Sep 13 00:08:22.847305 kubelet[1420]: I0913 00:08:22.846990 1420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78"} err="failed to get container status \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\": rpc error: code = NotFound desc = an error occurred when try to find container \"e83020e1064e616efde4ac391a5e27d51f2ddfc248614085cf29ff690a163a78\": not found" Sep 13 00:08:22.847305 kubelet[1420]: I0913 00:08:22.847005 1420 scope.go:117] "RemoveContainer" containerID="0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0" Sep 13 00:08:22.847643 env[1215]: time="2025-09-13T00:08:22.847599046Z" level=error msg="ContainerStatus for \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\": not found" Sep 13 00:08:22.847963 kubelet[1420]: E0913 00:08:22.847829 1420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\": not found" containerID="0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0" Sep 13 00:08:22.847963 kubelet[1420]: I0913 00:08:22.847878 1420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0"} err="failed to get container status \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\": rpc error: code = NotFound desc = an error occurred when try to find container \"0887ddf4fddd4894fd1716fd9df2d558cd816619ccbb673d37762b17b95e9de0\": not found" Sep 13 00:08:22.847963 kubelet[1420]: I0913 00:08:22.847904 1420 scope.go:117] "RemoveContainer" containerID="e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd" Sep 13 00:08:22.848258 env[1215]: time="2025-09-13T00:08:22.848211475Z" level=error msg="ContainerStatus for \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\": not found" Sep 13 00:08:22.848497 kubelet[1420]: E0913 00:08:22.848444 1420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\": not found" containerID="e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd" Sep 13 00:08:22.848497 kubelet[1420]: I0913 00:08:22.848469 1420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd"} err="failed to get container status \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\": rpc error: code = NotFound desc = an error occurred when try to find container \"e216daed6b0fc7aaaf02d020a5d4d70510e64cf46f90284bfb0ed20bd26b1ebd\": not found" Sep 13 00:08:22.853480 systemd[1]: var-lib-kubelet-pods-8aeb6b79\x2d8d41\x2d4c0b\x2d9365\x2d90ec4d029386-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4cnv.mount: Deactivated successfully. Sep 13 00:08:22.853586 systemd[1]: var-lib-kubelet-pods-8aeb6b79\x2d8d41\x2d4c0b\x2d9365\x2d90ec4d029386-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:08:22.855144 kubelet[1420]: I0913 00:08:22.855109 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:22.855389 kubelet[1420]: I0913 00:08:22.855353 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-kube-api-access-p4cnv" (OuterVolumeSpecName: "kube-api-access-p4cnv") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "kube-api-access-p4cnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:22.856630 kubelet[1420]: I0913 00:08:22.856583 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeb6b79-8d41-4c0b-9365-90ec4d029386-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8aeb6b79-8d41-4c0b-9365-90ec4d029386" (UID: "8aeb6b79-8d41-4c0b-9365-90ec4d029386"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:22.942995 kubelet[1420]: I0913 00:08:22.942954 1420 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p4cnv\" (UniqueName: \"kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-kube-api-access-p4cnv\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943192 kubelet[1420]: I0913 00:08:22.943178 1420 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-etc-cni-netd\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943255 kubelet[1420]: I0913 00:08:22.943245 1420 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8aeb6b79-8d41-4c0b-9365-90ec4d029386-clustermesh-secrets\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943322 kubelet[1420]: I0913 00:08:22.943312 1420 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-xtables-lock\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943384 kubelet[1420]: I0913 00:08:22.943373 1420 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cni-path\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943443 kubelet[1420]: I0913 00:08:22.943432 1420 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hubble-tls\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943500 kubelet[1420]: I0913 00:08:22.943490 1420 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-run\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943557 kubelet[1420]: I0913 00:08:22.943547 1420 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-bpf-maps\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943609 kubelet[1420]: I0913 00:08:22.943600 1420 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-hostproc\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943671 kubelet[1420]: I0913 00:08:22.943660 1420 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-config-path\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943731 kubelet[1420]: I0913 00:08:22.943721 1420 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-lib-modules\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943789 kubelet[1420]: I0913 00:08:22.943778 1420 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-net\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:22.943841 kubelet[1420]: I0913 00:08:22.943831 1420 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-cilium-cgroup\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 
00:08:22.943950 kubelet[1420]: I0913 00:08:22.943939 1420 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8aeb6b79-8d41-4c0b-9365-90ec4d029386-host-proc-sys-kernel\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:23.065711 systemd[1]: Removed slice kubepods-burstable-pod8aeb6b79_8d41_4c0b_9365_90ec4d029386.slice. Sep 13 00:08:23.065790 systemd[1]: kubepods-burstable-pod8aeb6b79_8d41_4c0b_9365_90ec4d029386.slice: Consumed 6.417s CPU time. Sep 13 00:08:23.473723 systemd[1]: var-lib-kubelet-pods-8aeb6b79\x2d8d41\x2d4c0b\x2d9365\x2d90ec4d029386-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:08:23.494732 kubelet[1420]: E0913 00:08:23.494681 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:24.494837 kubelet[1420]: E0913 00:08:24.494785 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:24.651569 kubelet[1420]: I0913 00:08:24.651531 1420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aeb6b79-8d41-4c0b-9365-90ec4d029386" path="/var/lib/kubelet/pods/8aeb6b79-8d41-4c0b-9365-90ec4d029386/volumes" Sep 13 00:08:25.495404 kubelet[1420]: E0913 00:08:25.495368 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:25.511883 systemd[1]: Created slice kubepods-burstable-pod872a5cce_6fba_45c4_b0e8_df80aca3b80a.slice. Sep 13 00:08:25.528672 systemd[1]: Created slice kubepods-besteffort-podb487eb10_ac21_4cc2_ae39_cb941d94acff.slice. Sep 13 00:08:25.659435 kubelet[1420]: I0913 00:08:25.659383 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-config-path\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.659678 kubelet[1420]: I0913 00:08:25.659655 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxgfk\" (UniqueName: \"kubernetes.io/projected/b487eb10-ac21-4cc2-ae39-cb941d94acff-kube-api-access-dxgfk\") pod \"cilium-operator-6c4d7847fc-9qc68\" (UID: \"b487eb10-ac21-4cc2-ae39-cb941d94acff\") " pod="kube-system/cilium-operator-6c4d7847fc-9qc68" Sep 13 00:08:25.659816 kubelet[1420]: I0913 00:08:25.659793 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-bpf-maps\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.660017 kubelet[1420]: I0913 00:08:25.659973 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-cgroup\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.660121 kubelet[1420]: I0913 00:08:25.660106 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b487eb10-ac21-4cc2-ae39-cb941d94acff-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9qc68\" (UID: \"b487eb10-ac21-4cc2-ae39-cb941d94acff\") " pod="kube-system/cilium-operator-6c4d7847fc-9qc68" Sep 13 00:08:25.660253 kubelet[1420]: I0913 00:08:25.660238 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-lib-modules\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.660364 kubelet[1420]: I0913 00:08:25.660350 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-xtables-lock\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.660483 kubelet[1420]: I0913 00:08:25.660469 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-net\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.660643 kubelet[1420]: I0913 00:08:25.660576 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-kernel\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.660742 kubelet[1420]: I0913 00:08:25.660727 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-run\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.660882 kubelet[1420]: I0913 00:08:25.660867 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cni-path\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.661004 kubelet[1420]: I0913 00:08:25.660981 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-etc-cni-netd\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.661112 kubelet[1420]: I0913 00:08:25.661100 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hostproc\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.661254 kubelet[1420]: I0913 00:08:25.661232 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-clustermesh-secrets\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " 
pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.661363 kubelet[1420]: I0913 00:08:25.661349 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hubble-tls\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.661490 kubelet[1420]: I0913 00:08:25.661467 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dxx\" (UniqueName: \"kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-kube-api-access-75dxx\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.661609 kubelet[1420]: I0913 00:08:25.661590 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-ipsec-secrets\") pod \"cilium-kf8vc\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " pod="kube-system/cilium-kf8vc" Sep 13 00:08:25.663380 kubelet[1420]: E0913 00:08:25.663334 1420 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-75dxx lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-kf8vc" podUID="872a5cce-6fba-45c4-b0e8-df80aca3b80a" Sep 13 00:08:25.860635 kubelet[1420]: E0913 00:08:25.860598 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:25.861653 env[1215]: time="2025-09-13T00:08:25.861311574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9qc68,Uid:b487eb10-ac21-4cc2-ae39-cb941d94acff,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:25.874380 env[1215]: time="2025-09-13T00:08:25.874312078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:25.874380 env[1215]: time="2025-09-13T00:08:25.874353279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:25.874380 env[1215]: time="2025-09-13T00:08:25.874364440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:25.874625 env[1215]: time="2025-09-13T00:08:25.874569488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3852fd8270419e574fb3f790df6c6f9bd8a781b35fd9005277e0de349174717f pid=2997 runtime=io.containerd.runc.v2 Sep 13 00:08:25.885463 systemd[1]: Started cri-containerd-3852fd8270419e574fb3f790df6c6f9bd8a781b35fd9005277e0de349174717f.scope. 
Sep 13 00:08:25.919135 env[1215]: time="2025-09-13T00:08:25.919036507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9qc68,Uid:b487eb10-ac21-4cc2-ae39-cb941d94acff,Namespace:kube-system,Attempt:0,} returns sandbox id \"3852fd8270419e574fb3f790df6c6f9bd8a781b35fd9005277e0de349174717f\"" Sep 13 00:08:25.920552 kubelet[1420]: E0913 00:08:25.920368 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:25.921454 env[1215]: time="2025-09-13T00:08:25.921425927Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:08:25.964692 kubelet[1420]: I0913 00:08:25.964638 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-net\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964692 kubelet[1420]: I0913 00:08:25.964674 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hostproc\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964692 kubelet[1420]: I0913 00:08:25.964690 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-cgroup\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964692 kubelet[1420]: I0913 00:08:25.964704 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-lib-modules\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964945 kubelet[1420]: I0913 00:08:25.964717 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cni-path\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964945 kubelet[1420]: I0913 00:08:25.964736 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-clustermesh-secrets\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964945 kubelet[1420]: I0913 00:08:25.964750 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-xtables-lock\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964945 kubelet[1420]: I0913 00:08:25.964765 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hubble-tls\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") 
" Sep 13 00:08:25.964945 kubelet[1420]: I0913 00:08:25.964784 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75dxx\" (UniqueName: \"kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-kube-api-access-75dxx\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.964945 kubelet[1420]: I0913 00:08:25.964799 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-ipsec-secrets\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.965080 kubelet[1420]: I0913 00:08:25.964816 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-config-path\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.965080 kubelet[1420]: I0913 00:08:25.964829 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-bpf-maps\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.965080 kubelet[1420]: I0913 00:08:25.964859 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-kernel\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.965080 kubelet[1420]: I0913 00:08:25.964875 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-run\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.965080 kubelet[1420]: I0913 00:08:25.964891 1420 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-etc-cni-netd\") pod \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\" (UID: \"872a5cce-6fba-45c4-b0e8-df80aca3b80a\") " Sep 13 00:08:25.965080 kubelet[1420]: I0913 00:08:25.964938 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965238 kubelet[1420]: I0913 00:08:25.964944 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965238 kubelet[1420]: I0913 00:08:25.964989 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965238 kubelet[1420]: I0913 00:08:25.965007 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hostproc" (OuterVolumeSpecName: "hostproc") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965238 kubelet[1420]: I0913 00:08:25.965021 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965238 kubelet[1420]: I0913 00:08:25.965034 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965353 kubelet[1420]: I0913 00:08:25.965046 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cni-path" (OuterVolumeSpecName: "cni-path") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965596 kubelet[1420]: I0913 00:08:25.965422 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965596 kubelet[1420]: I0913 00:08:25.965467 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.965596 kubelet[1420]: I0913 00:08:25.965488 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:25.967050 kubelet[1420]: I0913 00:08:25.967015 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:08:25.967664 kubelet[1420]: I0913 00:08:25.967629 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:25.968275 kubelet[1420]: I0913 00:08:25.968244 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-kube-api-access-75dxx" (OuterVolumeSpecName: "kube-api-access-75dxx") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "kube-api-access-75dxx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:25.968471 kubelet[1420]: I0913 00:08:25.968429 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:25.969427 kubelet[1420]: I0913 00:08:25.969389 1420 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "872a5cce-6fba-45c4-b0e8-df80aca3b80a" (UID: "872a5cce-6fba-45c4-b0e8-df80aca3b80a"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:26.065858 kubelet[1420]: I0913 00:08:26.065801 1420 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-net\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.065858 kubelet[1420]: I0913 00:08:26.065836 1420 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hostproc\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.065858 kubelet[1420]: I0913 00:08:26.065865 1420 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-cgroup\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065878 1420 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-lib-modules\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065886 1420 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cni-path\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065895 1420 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-clustermesh-secrets\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065902 1420 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-xtables-lock\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065910 1420 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-hubble-tls\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065918 1420 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-75dxx\" (UniqueName: \"kubernetes.io/projected/872a5cce-6fba-45c4-b0e8-df80aca3b80a-kube-api-access-75dxx\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065929 1420 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-ipsec-secrets\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066025 kubelet[1420]: I0913 00:08:26.065937 1420 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-config-path\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066243 kubelet[1420]: I0913 00:08:26.065944 1420 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-bpf-maps\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066243 kubelet[1420]: I0913 00:08:26.065952 1420 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-host-proc-sys-kernel\") on node 
\"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066243 kubelet[1420]: I0913 00:08:26.065960 1420 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-cilium-run\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.066243 kubelet[1420]: I0913 00:08:26.065967 1420 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/872a5cce-6fba-45c4-b0e8-df80aca3b80a-etc-cni-netd\") on node \"10.0.0.29\" DevicePath \"\"" Sep 13 00:08:26.495765 kubelet[1420]: E0913 00:08:26.495698 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:26.624860 kubelet[1420]: E0913 00:08:26.624807 1420 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:08:26.653753 systemd[1]: Removed slice kubepods-burstable-pod872a5cce_6fba_45c4_b0e8_df80aca3b80a.slice. Sep 13 00:08:26.768009 systemd[1]: var-lib-kubelet-pods-872a5cce\x2d6fba\x2d45c4\x2db0e8\x2ddf80aca3b80a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75dxx.mount: Deactivated successfully. Sep 13 00:08:26.768100 systemd[1]: var-lib-kubelet-pods-872a5cce\x2d6fba\x2d45c4\x2db0e8\x2ddf80aca3b80a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:08:26.768157 systemd[1]: var-lib-kubelet-pods-872a5cce\x2d6fba\x2d45c4\x2db0e8\x2ddf80aca3b80a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:08:26.768204 systemd[1]: var-lib-kubelet-pods-872a5cce\x2d6fba\x2d45c4\x2db0e8\x2ddf80aca3b80a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:08:26.819954 systemd[1]: Created slice kubepods-burstable-pode54847cf_af8e_4360_92f9_5531677da54e.slice. 
Sep 13 00:08:26.970462 kubelet[1420]: I0913 00:08:26.970417 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e54847cf-af8e-4360-92f9-5531677da54e-cilium-ipsec-secrets\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970610 kubelet[1420]: I0913 00:08:26.970472 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-host-proc-sys-kernel\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970610 kubelet[1420]: I0913 00:08:26.970506 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-cilium-cgroup\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970610 kubelet[1420]: I0913 00:08:26.970537 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-cni-path\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970610 kubelet[1420]: I0913 00:08:26.970563 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-host-proc-sys-net\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970610 kubelet[1420]: I0913 00:08:26.970601 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e54847cf-af8e-4360-92f9-5531677da54e-hubble-tls\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970729 kubelet[1420]: I0913 00:08:26.970630 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-hostproc\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970729 kubelet[1420]: I0913 00:08:26.970653 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e54847cf-af8e-4360-92f9-5531677da54e-cilium-config-path\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970729 kubelet[1420]: I0913 00:08:26.970675 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56558\" (UniqueName: \"kubernetes.io/projected/e54847cf-af8e-4360-92f9-5531677da54e-kube-api-access-56558\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970729 kubelet[1420]: I0913 00:08:26.970694 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-cilium-run\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970729 kubelet[1420]: I0913 00:08:26.970712 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-bpf-maps\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970841 kubelet[1420]: I0913 00:08:26.970760 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-etc-cni-netd\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970841 kubelet[1420]: I0913 00:08:26.970800 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-lib-modules\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970841 kubelet[1420]: I0913 00:08:26.970817 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e54847cf-af8e-4360-92f9-5531677da54e-clustermesh-secrets\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:26.970841 kubelet[1420]: I0913 00:08:26.970834 1420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e54847cf-af8e-4360-92f9-5531677da54e-xtables-lock\") pod \"cilium-4dptq\" (UID: \"e54847cf-af8e-4360-92f9-5531677da54e\") " pod="kube-system/cilium-4dptq" Sep 13 00:08:27.133253 kubelet[1420]: E0913 00:08:27.133203 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:27.133725 env[1215]: time="2025-09-13T00:08:27.133683594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dptq,Uid:e54847cf-af8e-4360-92f9-5531677da54e,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:27.146680 env[1215]: time="2025-09-13T00:08:27.146610252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:27.146680 env[1215]: time="2025-09-13T00:08:27.146652813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:27.146680 env[1215]: time="2025-09-13T00:08:27.146663374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:27.146903 env[1215]: time="2025-09-13T00:08:27.146773138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c pid=3048 runtime=io.containerd.runc.v2 Sep 13 00:08:27.158247 systemd[1]: Started cri-containerd-24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c.scope. 
Sep 13 00:08:27.194101 env[1215]: time="2025-09-13T00:08:27.194054040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dptq,Uid:e54847cf-af8e-4360-92f9-5531677da54e,Namespace:kube-system,Attempt:0,} returns sandbox id \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\"" Sep 13 00:08:27.195709 kubelet[1420]: E0913 00:08:27.194792 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:27.199756 env[1215]: time="2025-09-13T00:08:27.199707538Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:08:27.210975 env[1215]: time="2025-09-13T00:08:27.210903969Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"abe6d49b540df9811a3aad068d1f96c340da27a17f9ee20953d4089dc0f946bd\"" Sep 13 00:08:27.211673 env[1215]: time="2025-09-13T00:08:27.211643158Z" level=info msg="StartContainer for \"abe6d49b540df9811a3aad068d1f96c340da27a17f9ee20953d4089dc0f946bd\"" Sep 13 00:08:27.228315 systemd[1]: Started cri-containerd-abe6d49b540df9811a3aad068d1f96c340da27a17f9ee20953d4089dc0f946bd.scope. Sep 13 00:08:27.260969 env[1215]: time="2025-09-13T00:08:27.260922057Z" level=info msg="StartContainer for \"abe6d49b540df9811a3aad068d1f96c340da27a17f9ee20953d4089dc0f946bd\" returns successfully" Sep 13 00:08:27.267470 systemd[1]: cri-containerd-abe6d49b540df9811a3aad068d1f96c340da27a17f9ee20953d4089dc0f946bd.scope: Deactivated successfully. 
Sep 13 00:08:27.297992 env[1215]: time="2025-09-13T00:08:27.297929883Z" level=info msg="shim disconnected" id=abe6d49b540df9811a3aad068d1f96c340da27a17f9ee20953d4089dc0f946bd Sep 13 00:08:27.297992 env[1215]: time="2025-09-13T00:08:27.297974605Z" level=warning msg="cleaning up after shim disconnected" id=abe6d49b540df9811a3aad068d1f96c340da27a17f9ee20953d4089dc0f946bd namespace=k8s.io Sep 13 00:08:27.297992 env[1215]: time="2025-09-13T00:08:27.297983845Z" level=info msg="cleaning up dead shim" Sep 13 00:08:27.304582 env[1215]: time="2025-09-13T00:08:27.304544658Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3131 runtime=io.containerd.runc.v2\n" Sep 13 00:08:27.496518 kubelet[1420]: E0913 00:08:27.495992 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:27.642410 kubelet[1420]: I0913 00:08:27.642363 1420 setters.go:618] "Node became not ready" node="10.0.0.29" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:08:27Z","lastTransitionTime":"2025-09-13T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:08:27.678407 env[1215]: time="2025-09-13T00:08:27.678348102Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:27.679841 env[1215]: time="2025-09-13T00:08:27.679796438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:27.681561 env[1215]: time="2025-09-13T00:08:27.681530385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:08:27.682013 env[1215]: time="2025-09-13T00:08:27.681983442Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 00:08:27.685494 env[1215]: time="2025-09-13T00:08:27.685457616Z" level=info msg="CreateContainer within sandbox \"3852fd8270419e574fb3f790df6c6f9bd8a781b35fd9005277e0de349174717f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:08:27.695185 env[1215]: time="2025-09-13T00:08:27.695132789Z" level=info msg="CreateContainer within sandbox \"3852fd8270419e574fb3f790df6c6f9bd8a781b35fd9005277e0de349174717f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c60dd11513de63a11ef5da4df46e0fe3bf5404146aaa9188b3eb6e1677b8e7f4\"" Sep 13 00:08:27.695678 env[1215]: time="2025-09-13T00:08:27.695655129Z" level=info msg="StartContainer for \"c60dd11513de63a11ef5da4df46e0fe3bf5404146aaa9188b3eb6e1677b8e7f4\"" Sep 13 00:08:27.710121 systemd[1]: Started cri-containerd-c60dd11513de63a11ef5da4df46e0fe3bf5404146aaa9188b3eb6e1677b8e7f4.scope. 
Sep 13 00:08:27.739892 env[1215]: time="2025-09-13T00:08:27.739154125Z" level=info msg="StartContainer for \"c60dd11513de63a11ef5da4df46e0fe3bf5404146aaa9188b3eb6e1677b8e7f4\" returns successfully" Sep 13 00:08:27.778330 kubelet[1420]: E0913 00:08:27.778184 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:27.779955 kubelet[1420]: E0913 00:08:27.779932 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:27.783699 env[1215]: time="2025-09-13T00:08:27.783663960Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:08:27.793734 kubelet[1420]: I0913 00:08:27.793678 1420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9qc68" podStartSLOduration=1.03207443 podStartE2EDuration="2.793664986s" podCreationTimestamp="2025-09-13 00:08:25 +0000 UTC" firstStartedPulling="2025-09-13 00:08:25.921133555 +0000 UTC m=+50.482728882" lastFinishedPulling="2025-09-13 00:08:27.682724111 +0000 UTC m=+52.244319438" observedRunningTime="2025-09-13 00:08:27.793283011 +0000 UTC m=+52.354878378" watchObservedRunningTime="2025-09-13 00:08:27.793664986 +0000 UTC m=+52.355260313" Sep 13 00:08:27.802281 env[1215]: time="2025-09-13T00:08:27.802230156Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf\"" Sep 13 00:08:27.803736 env[1215]: time="2025-09-13T00:08:27.803566247Z" level=info msg="StartContainer for \"bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf\"" Sep 13 00:08:27.832860 systemd[1]: Started cri-containerd-bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf.scope. Sep 13 00:08:27.868432 env[1215]: time="2025-09-13T00:08:27.868366784Z" level=info msg="StartContainer for \"bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf\" returns successfully" Sep 13 00:08:27.875605 systemd[1]: cri-containerd-bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf.scope: Deactivated successfully. 
Sep 13 00:08:27.958903 env[1215]: time="2025-09-13T00:08:27.958835271Z" level=info msg="shim disconnected" id=bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf Sep 13 00:08:27.958903 env[1215]: time="2025-09-13T00:08:27.958899473Z" level=warning msg="cleaning up after shim disconnected" id=bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf namespace=k8s.io Sep 13 00:08:27.958903 env[1215]: time="2025-09-13T00:08:27.958909233Z" level=info msg="cleaning up dead shim" Sep 13 00:08:27.965384 env[1215]: time="2025-09-13T00:08:27.965340481Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3234 runtime=io.containerd.runc.v2\n" Sep 13 00:08:28.496550 kubelet[1420]: E0913 00:08:28.496503 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:28.650758 kubelet[1420]: I0913 00:08:28.650723 1420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872a5cce-6fba-45c4-b0e8-df80aca3b80a" path="/var/lib/kubelet/pods/872a5cce-6fba-45c4-b0e8-df80aca3b80a/volumes" Sep 13 00:08:28.767586 systemd[1]: run-containerd-runc-k8s.io-bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf-runc.erxu7F.mount: Deactivated successfully. Sep 13 00:08:28.767693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bce1c51bb8bf35cfa184bc3a63b35668b12ef302bc2eaa73bd5d82eee2eed4bf-rootfs.mount: Deactivated successfully. Sep 13 00:08:28.782840 kubelet[1420]: E0913 00:08:28.782811 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:28.783037 kubelet[1420]: E0913 00:08:28.782923 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:28.786160 env[1215]: time="2025-09-13T00:08:28.786114188Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:08:28.798530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484007392.mount: Deactivated successfully. Sep 13 00:08:28.803954 env[1215]: time="2025-09-13T00:08:28.803905727Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"37027eee44a625ba658866bae54da8bd5552a05e1529195f45d6f5da20261b97\"" Sep 13 00:08:28.805687 env[1215]: time="2025-09-13T00:08:28.805644071Z" level=info msg="StartContainer for \"37027eee44a625ba658866bae54da8bd5552a05e1529195f45d6f5da20261b97\"" Sep 13 00:08:28.823336 systemd[1]: Started cri-containerd-37027eee44a625ba658866bae54da8bd5552a05e1529195f45d6f5da20261b97.scope. Sep 13 00:08:28.856319 systemd[1]: cri-containerd-37027eee44a625ba658866bae54da8bd5552a05e1529195f45d6f5da20261b97.scope: Deactivated successfully. 
Sep 13 00:08:28.856902 env[1215]: time="2025-09-13T00:08:28.856866209Z" level=info msg="StartContainer for \"37027eee44a625ba658866bae54da8bd5552a05e1529195f45d6f5da20261b97\" returns successfully" Sep 13 00:08:28.875088 env[1215]: time="2025-09-13T00:08:28.875023402Z" level=info msg="shim disconnected" id=37027eee44a625ba658866bae54da8bd5552a05e1529195f45d6f5da20261b97 Sep 13 00:08:28.875088 env[1215]: time="2025-09-13T00:08:28.875073204Z" level=warning msg="cleaning up after shim disconnected" id=37027eee44a625ba658866bae54da8bd5552a05e1529195f45d6f5da20261b97 namespace=k8s.io Sep 13 00:08:28.875088 env[1215]: time="2025-09-13T00:08:28.875081444Z" level=info msg="cleaning up dead shim" Sep 13 00:08:28.882551 env[1215]: time="2025-09-13T00:08:28.882507840Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3293 runtime=io.containerd.runc.v2\n" Sep 13 00:08:29.497720 kubelet[1420]: E0913 00:08:29.497577 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:29.789552 kubelet[1420]: E0913 00:08:29.789278 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:29.793693 env[1215]: time="2025-09-13T00:08:29.793633621Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:08:29.814729 env[1215]: time="2025-09-13T00:08:29.814682772Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf\"" Sep 13 00:08:29.816575 env[1215]: time="2025-09-13T00:08:29.816515198Z" level=info msg="StartContainer for \"34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf\"" Sep 13 00:08:29.833326 systemd[1]: Started cri-containerd-34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf.scope. Sep 13 00:08:29.866362 systemd[1]: cri-containerd-34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf.scope: Deactivated successfully. 
Sep 13 00:08:29.868455 env[1215]: time="2025-09-13T00:08:29.868412689Z" level=info msg="StartContainer for \"34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf\" returns successfully" Sep 13 00:08:29.889602 env[1215]: time="2025-09-13T00:08:29.889557923Z" level=info msg="shim disconnected" id=34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf Sep 13 00:08:29.889602 env[1215]: time="2025-09-13T00:08:29.889601644Z" level=warning msg="cleaning up after shim disconnected" id=34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf namespace=k8s.io Sep 13 00:08:29.889837 env[1215]: time="2025-09-13T00:08:29.889611525Z" level=info msg="cleaning up dead shim" Sep 13 00:08:29.898500 env[1215]: time="2025-09-13T00:08:29.898436520Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3351 runtime=io.containerd.runc.v2\n" Sep 13 00:08:30.498790 kubelet[1420]: E0913 00:08:30.498677 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:30.767801 systemd[1]: run-containerd-runc-k8s.io-34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf-runc.Qjom9l.mount: Deactivated successfully. Sep 13 00:08:30.767915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34a550bb9963960873776b222610d44a28eea6f9bcf1f4bb9032e0b22f29d9bf-rootfs.mount: Deactivated successfully. Sep 13 00:08:30.794623 kubelet[1420]: E0913 00:08:30.794557 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:30.799649 env[1215]: time="2025-09-13T00:08:30.799561788Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:08:30.820354 env[1215]: time="2025-09-13T00:08:30.820296781Z" level=info msg="CreateContainer within sandbox \"24a2b86cc3e6f6bde6977aa9460ce2a4e8e10f0ccd5e5ea9435b32fa45e30b6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da\"" Sep 13 00:08:30.821006 env[1215]: time="2025-09-13T00:08:30.820970964Z" level=info msg="StartContainer for \"dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da\"" Sep 13 00:08:30.844936 systemd[1]: Started cri-containerd-dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da.scope. Sep 13 00:08:30.888533 env[1215]: time="2025-09-13T00:08:30.888381961Z" level=info msg="StartContainer for \"dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da\" returns successfully" Sep 13 00:08:31.187875 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 13 00:08:31.499603 kubelet[1420]: E0913 00:08:31.499474 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:31.767955 systemd[1]: run-containerd-runc-k8s.io-dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da-runc.WgPX9i.mount: Deactivated successfully. 
Sep 13 00:08:31.802456 kubelet[1420]: E0913 00:08:31.801546 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:32.041602 systemd[1]: run-containerd-runc-k8s.io-dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da-runc.hvJtT6.mount: Deactivated successfully. Sep 13 00:08:32.500403 kubelet[1420]: E0913 00:08:32.500256 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:33.134135 kubelet[1420]: E0913 00:08:33.134104 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:33.500642 kubelet[1420]: E0913 00:08:33.500546 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:34.106720 systemd-networkd[1043]: lxc_health: Link UP Sep 13 00:08:34.115786 systemd-networkd[1043]: lxc_health: Gained carrier Sep 13 00:08:34.115972 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:08:34.500938 kubelet[1420]: E0913 00:08:34.500800 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:35.135324 kubelet[1420]: E0913 00:08:35.135294 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:35.156649 kubelet[1420]: I0913 00:08:35.156588 1420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4dptq" podStartSLOduration=9.156570398 podStartE2EDuration="9.156570398s" podCreationTimestamp="2025-09-13 00:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:31.829272006 +0000 UTC m=+56.390867333" watchObservedRunningTime="2025-09-13 00:08:35.156570398 +0000 UTC m=+59.718165725" Sep 13 00:08:35.502124 kubelet[1420]: E0913 00:08:35.502012 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:35.566179 systemd-networkd[1043]: lxc_health: Gained IPv6LL Sep 13 00:08:35.807902 kubelet[1420]: E0913 00:08:35.807871 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:36.303633 systemd[1]: run-containerd-runc-k8s.io-dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da-runc.0Yfhfl.mount: Deactivated successfully. 
Sep 13 00:08:36.459586 kubelet[1420]: E0913 00:08:36.459529 1420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:36.474361 env[1215]: time="2025-09-13T00:08:36.474106650Z" level=info msg="StopPodSandbox for \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\"" Sep 13 00:08:36.474361 env[1215]: time="2025-09-13T00:08:36.474195773Z" level=info msg="TearDown network for sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" successfully" Sep 13 00:08:36.474361 env[1215]: time="2025-09-13T00:08:36.474228934Z" level=info msg="StopPodSandbox for \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" returns successfully" Sep 13 00:08:36.474721 env[1215]: time="2025-09-13T00:08:36.474639425Z" level=info msg="RemovePodSandbox for \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\"" Sep 13 00:08:36.474721 env[1215]: time="2025-09-13T00:08:36.474665586Z" level=info msg="Forcibly stopping sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\"" Sep 13 00:08:36.474776 env[1215]: time="2025-09-13T00:08:36.474735228Z" level=info msg="TearDown network for sandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" successfully" Sep 13 00:08:36.481066 env[1215]: time="2025-09-13T00:08:36.481016725Z" level=info msg="RemovePodSandbox \"0c5efe30ad36a26713a113c6212528d20c67363a965b907de954f1cb28d1a08a\" returns successfully" Sep 13 00:08:36.503063 kubelet[1420]: E0913 00:08:36.503030 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:36.809471 kubelet[1420]: E0913 00:08:36.809412 1420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:37.504035 kubelet[1420]: E0913 00:08:37.503986 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:38.440583 systemd[1]: run-containerd-runc-k8s.io-dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da-runc.Wi44js.mount: Deactivated successfully. Sep 13 00:08:38.504136 kubelet[1420]: E0913 00:08:38.504095 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:39.504987 kubelet[1420]: E0913 00:08:39.504920 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:40.505088 kubelet[1420]: E0913 00:08:40.505038 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:40.589212 systemd[1]: run-containerd-runc-k8s.io-dc1076c2d2feca223ad66dc3b05bd6d889e0bb9504f6ce755b92120ba8c3d7da-runc.Zhieu2.mount: Deactivated successfully. 
Sep 13 00:08:40.650242 kubelet[1420]: E0913 00:08:40.650210 1420 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52908->127.0.0.1:37743: write tcp 127.0.0.1:52908->127.0.0.1:37743: write: broken pipe Sep 13 00:08:41.506038 kubelet[1420]: E0913 00:08:41.505998 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:08:42.506456 kubelet[1420]: E0913 00:08:42.506413 1420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"