Sep 13 00:06:24.688533 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:06:24.688551 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 00:06:24.688559 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:06:24.688565 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 13 00:06:24.688570 kernel: random: crng init done
Sep 13 00:06:24.688575 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:06:24.688581 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 13 00:06:24.688588 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:06:24.688594 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688599 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688604 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688610 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688615 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688620 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688629 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688634 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688641 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:24.688646 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 13 00:06:24.688652 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:06:24.688658 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:06:24.688663 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Sep 13 00:06:24.688669 kernel: Zone ranges:
Sep 13 00:06:24.688674 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:06:24.688681 kernel: DMA32 empty
Sep 13 00:06:24.688686 kernel: Normal empty
Sep 13 00:06:24.688692 kernel: Movable zone start for each node
Sep 13 00:06:24.688697 kernel: Early memory node ranges
Sep 13 00:06:24.688703 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 13 00:06:24.688708 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 13 00:06:24.688714 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 13 00:06:24.688720 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 13 00:06:24.688726 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 13 00:06:24.688731 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 13 00:06:24.688737 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 13 00:06:24.688742 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:06:24.688749 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 13 00:06:24.688755 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:06:24.688760 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:06:24.688765 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:06:24.688771 kernel: psci: Trusted OS migration not required
Sep 13 00:06:24.688779 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:06:24.688785 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:06:24.688792 kernel: ACPI: SRAT not present
Sep 13 00:06:24.688799 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 00:06:24.688805 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 00:06:24.688811 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 13 00:06:24.688817 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:06:24.688823 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:06:24.688830 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:06:24.688836 kernel: CPU features: detected: Spectre-v4
Sep 13 00:06:24.688842 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:06:24.688859 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:06:24.688865 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:06:24.688871 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:06:24.688877 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:06:24.688883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 13 00:06:24.688889 kernel: Policy zone: DMA
Sep 13 00:06:24.688896 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:06:24.688902 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:06:24.688908 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:06:24.688914 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:06:24.688920 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:06:24.688928 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Sep 13 00:06:24.688934 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:06:24.688940 kernel: trace event string verifier disabled
Sep 13 00:06:24.688946 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:06:24.688952 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:06:24.688962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:06:24.688969 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:06:24.688976 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:06:24.688982 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:06:24.688988 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:06:24.688994 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:06:24.689001 kernel: GICv3: 256 SPIs implemented
Sep 13 00:06:24.689007 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:06:24.689013 kernel: GICv3: Distributor has no Range Selector support
Sep 13 00:06:24.689019 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:06:24.689025 kernel: GICv3: 16 PPIs implemented
Sep 13 00:06:24.689031 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:06:24.689037 kernel: ACPI: SRAT not present
Sep 13 00:06:24.689042 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:06:24.689048 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:06:24.689055 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:06:24.689060 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 13 00:06:24.689067 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 13 00:06:24.689074 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:24.689080 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:06:24.689086 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:06:24.689092 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:06:24.689098 kernel: arm-pv: using stolen time PV
Sep 13 00:06:24.689104 kernel: Console: colour dummy device 80x25
Sep 13 00:06:24.689111 kernel: ACPI: Core revision 20210730
Sep 13 00:06:24.689117 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:06:24.689123 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:06:24.689130 kernel: LSM: Security Framework initializing
Sep 13 00:06:24.689137 kernel: SELinux: Initializing.
Sep 13 00:06:24.689143 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:06:24.689149 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:06:24.689155 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:06:24.689161 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:06:24.689167 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:06:24.689174 kernel: Remapping and enabling EFI services.
Sep 13 00:06:24.689180 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:06:24.689186 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:06:24.689193 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:06:24.689200 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 13 00:06:24.689206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:24.689212 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:06:24.689219 kernel: Detected PIPT I-cache on CPU2
Sep 13 00:06:24.689225 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 13 00:06:24.689231 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 13 00:06:24.689238 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:24.689244 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 13 00:06:24.689250 kernel: Detected PIPT I-cache on CPU3
Sep 13 00:06:24.689257 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 13 00:06:24.689264 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 13 00:06:24.689270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:06:24.689276 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 13 00:06:24.689287 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:06:24.689295 kernel: SMP: Total of 4 processors activated.
Sep 13 00:06:24.689302 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:06:24.689308 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:06:24.689315 kernel: CPU features: detected: Common not Private translations
Sep 13 00:06:24.689321 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:06:24.689328 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:06:24.689334 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:06:24.689345 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:06:24.689352 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:06:24.689358 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:06:24.689365 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:06:24.689371 kernel: alternatives: patching kernel code
Sep 13 00:06:24.689379 kernel: devtmpfs: initialized
Sep 13 00:06:24.689385 kernel: KASLR enabled
Sep 13 00:06:24.689392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:06:24.689399 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:06:24.689405 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:06:24.689411 kernel: SMBIOS 3.0.0 present.
Sep 13 00:06:24.689418 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 13 00:06:24.689424 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:06:24.689431 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:06:24.689439 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:06:24.689446 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:06:24.689452 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:06:24.689459 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
Sep 13 00:06:24.689472 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:06:24.689479 kernel: cpuidle: using governor menu
Sep 13 00:06:24.689485 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:06:24.689492 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:06:24.689498 kernel: ACPI: bus type PCI registered
Sep 13 00:06:24.689506 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:06:24.689513 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:06:24.689519 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:06:24.689526 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:06:24.689532 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:06:24.689539 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:06:24.689545 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:06:24.689552 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:06:24.689558 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:06:24.689566 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:06:24.689572 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:06:24.689579 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:06:24.689586 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:06:24.689592 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:06:24.689599 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:06:24.689605 kernel: ACPI: Interpreter enabled
Sep 13 00:06:24.689612 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:06:24.689618 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:06:24.689626 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:06:24.689632 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:06:24.689639 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:06:24.689756 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:06:24.689819 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:06:24.689886 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:06:24.689945 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:06:24.690004 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:06:24.690013 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:06:24.690019 kernel: PCI host bridge to bus 0000:00
Sep 13 00:06:24.690082 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:06:24.690134 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:06:24.690185 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:06:24.690235 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:06:24.690305 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:06:24.690377 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:06:24.690439 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 13 00:06:24.690528 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 13 00:06:24.690596 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:06:24.690669 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:06:24.690731 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 13 00:06:24.690804 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 13 00:06:24.690930 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:06:24.690998 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:06:24.691061 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:06:24.691070 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:06:24.691077 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:06:24.691083 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:06:24.691093 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:06:24.691099 kernel: iommu: Default domain type: Translated
Sep 13 00:06:24.691106 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:06:24.691112 kernel: vgaarb: loaded
Sep 13 00:06:24.691119 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:06:24.691126 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:06:24.691132 kernel: PTP clock support registered
Sep 13 00:06:24.691139 kernel: Registered efivars operations
Sep 13 00:06:24.691145 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:06:24.691152 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:06:24.691160 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:06:24.691166 kernel: pnp: PnP ACPI init
Sep 13 00:06:24.691231 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:06:24.691241 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:06:24.691247 kernel: NET: Registered PF_INET protocol family
Sep 13 00:06:24.691254 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:06:24.691261 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:06:24.691267 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:06:24.691276 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:06:24.691282 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:06:24.691289 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:06:24.691295 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:06:24.691302 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:06:24.691308 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:06:24.691315 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:06:24.691321 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 13 00:06:24.691328 kernel: kvm [1]: HYP mode not available
Sep 13 00:06:24.691336 kernel: Initialise system trusted keyrings
Sep 13 00:06:24.691346 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:06:24.691353 kernel: Key type asymmetric registered
Sep 13 00:06:24.691359 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:06:24.691366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:06:24.691373 kernel: io scheduler mq-deadline registered
Sep 13 00:06:24.691379 kernel: io scheduler kyber registered
Sep 13 00:06:24.691386 kernel: io scheduler bfq registered
Sep 13 00:06:24.691392 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:06:24.691400 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:06:24.691407 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:06:24.691479 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 13 00:06:24.691488 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:06:24.691495 kernel: thunder_xcv, ver 1.0
Sep 13 00:06:24.691501 kernel: thunder_bgx, ver 1.0
Sep 13 00:06:24.691508 kernel: nicpf, ver 1.0
Sep 13 00:06:24.691514 kernel: nicvf, ver 1.0
Sep 13 00:06:24.691586 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:06:24.691646 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:06:24 UTC (1757721984)
Sep 13 00:06:24.691655 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:06:24.691662 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:06:24.691668 kernel: Segment Routing with IPv6
Sep 13 00:06:24.691675 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:06:24.691681 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:06:24.691688 kernel: Key type dns_resolver registered
Sep 13 00:06:24.691694 kernel: registered taskstats version 1
Sep 13 00:06:24.691702 kernel: Loading compiled-in X.509 certificates
Sep 13 00:06:24.691709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 00:06:24.691715 kernel: Key type .fscrypt registered
Sep 13 00:06:24.691722 kernel: Key type fscrypt-provisioning registered
Sep 13 00:06:24.691728 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:06:24.691735 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:06:24.691741 kernel: ima: No architecture policies found
Sep 13 00:06:24.691748 kernel: clk: Disabling unused clocks
Sep 13 00:06:24.691754 kernel: Freeing unused kernel memory: 36416K
Sep 13 00:06:24.691762 kernel: Run /init as init process
Sep 13 00:06:24.691768 kernel: with arguments:
Sep 13 00:06:24.691775 kernel: /init
Sep 13 00:06:24.691781 kernel: with environment:
Sep 13 00:06:24.691787 kernel: HOME=/
Sep 13 00:06:24.691793 kernel: TERM=linux
Sep 13 00:06:24.691800 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:06:24.691808 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:06:24.691818 systemd[1]: Detected virtualization kvm.
Sep 13 00:06:24.691825 systemd[1]: Detected architecture arm64.
Sep 13 00:06:24.691832 systemd[1]: Running in initrd.
Sep 13 00:06:24.691838 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:06:24.691849 systemd[1]: Hostname set to .
Sep 13 00:06:24.691863 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:24.691870 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:06:24.691877 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:06:24.691886 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:06:24.691893 systemd[1]: Reached target paths.target.
Sep 13 00:06:24.691900 systemd[1]: Reached target slices.target.
Sep 13 00:06:24.691906 systemd[1]: Reached target swap.target.
Sep 13 00:06:24.691913 systemd[1]: Reached target timers.target.
Sep 13 00:06:24.691920 systemd[1]: Listening on iscsid.socket.
Sep 13 00:06:24.691928 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:06:24.691936 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:06:24.691944 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:06:24.691951 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:06:24.691958 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:06:24.691965 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:06:24.691972 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:06:24.691980 systemd[1]: Reached target sockets.target.
Sep 13 00:06:24.691987 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:06:24.691993 systemd[1]: Finished network-cleanup.service.
Sep 13 00:06:24.692002 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:06:24.692009 systemd[1]: Starting systemd-journald.service...
Sep 13 00:06:24.692016 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:06:24.692023 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:06:24.692030 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:06:24.692037 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:06:24.692044 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:06:24.692051 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:06:24.692058 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:06:24.692066 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:06:24.692076 systemd-journald[290]: Journal started
Sep 13 00:06:24.692115 systemd-journald[290]: Runtime Journal (/run/log/journal/819d0cafb87640a38a14d6d066fad056) is 6.0M, max 48.7M, 42.6M free.
Sep 13 00:06:24.691034 systemd-modules-load[291]: Inserted module 'overlay'
Sep 13 00:06:24.695692 systemd[1]: Started systemd-journald.service.
Sep 13 00:06:24.695711 kernel: audit: type=1130 audit(1757721984.688:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.700099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:06:24.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.706569 kernel: audit: type=1130 audit(1757721984.699:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.706601 kernel: audit: type=1130 audit(1757721984.700:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.713120 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:06:24.713863 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:06:24.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.715563 systemd-resolved[292]: Positive Trust Anchors:
Sep 13 00:06:24.717744 kernel: audit: type=1130 audit(1757721984.714:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.717761 kernel: Bridge firewalling registered
Sep 13 00:06:24.715577 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:24.715605 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:06:24.717720 systemd-modules-load[291]: Inserted module 'br_netfilter'
Sep 13 00:06:24.719300 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:06:24.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.719887 systemd-resolved[292]: Defaulting to hostname 'linux'.
Sep 13 00:06:24.730503 kernel: audit: type=1130 audit(1757721984.725:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.724628 systemd[1]: Started systemd-resolved.service.
Sep 13 00:06:24.725880 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:06:24.732477 kernel: SCSI subsystem initialized
Sep 13 00:06:24.733654 dracut-cmdline[308]: dracut-dracut-053
Sep 13 00:06:24.735858 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:06:24.741921 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:06:24.741952 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:06:24.741961 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:06:24.745018 systemd-modules-load[291]: Inserted module 'dm_multipath'
Sep 13 00:06:24.746021 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:06:24.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.749503 kernel: audit: type=1130 audit(1757721984.746:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.749621 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:06:24.758246 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:06:24.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.761480 kernel: audit: type=1130 audit(1757721984.759:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.796493 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:06:24.808506 kernel: iscsi: registered transport (tcp)
Sep 13 00:06:24.823818 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:06:24.823883 kernel: QLogic iSCSI HBA Driver
Sep 13 00:06:24.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.859592 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:06:24.863553 kernel: audit: type=1130 audit(1757721984.859:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.861093 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:06:24.903506 kernel: raid6: neonx8 gen() 13327 MB/s
Sep 13 00:06:24.920497 kernel: raid6: neonx8 xor() 10516 MB/s
Sep 13 00:06:24.937657 kernel: raid6: neonx4 gen() 12776 MB/s
Sep 13 00:06:24.954578 kernel: raid6: neonx4 xor() 10542 MB/s
Sep 13 00:06:24.971619 kernel: raid6: neonx2 gen() 12552 MB/s
Sep 13 00:06:24.988510 kernel: raid6: neonx2 xor() 10152 MB/s
Sep 13 00:06:25.005507 kernel: raid6: neonx1 gen() 10360 MB/s
Sep 13 00:06:25.022513 kernel: raid6: neonx1 xor() 8632 MB/s
Sep 13 00:06:25.039551 kernel: raid6: int64x8 gen() 6183 MB/s
Sep 13 00:06:25.056508 kernel: raid6: int64x8 xor() 3492 MB/s
Sep 13 00:06:25.073510 kernel: raid6: int64x4 gen() 7108 MB/s
Sep 13 00:06:25.090506 kernel: raid6: int64x4 xor() 3796 MB/s
Sep 13 00:06:25.108019 kernel: raid6: int64x2 gen() 6104 MB/s
Sep 13 00:06:25.124510 kernel: raid6: int64x2 xor() 3273 MB/s
Sep 13 00:06:25.141517 kernel: raid6: int64x1 gen() 4967 MB/s
Sep 13 00:06:25.158799 kernel: raid6: int64x1 xor() 2607 MB/s
Sep 13 00:06:25.158866 kernel: raid6: using algorithm neonx8 gen() 13327 MB/s
Sep 13 00:06:25.158877 kernel: raid6: .... xor() 10516 MB/s, rmw enabled
Sep 13 00:06:25.158885 kernel: raid6: using neon recovery algorithm
Sep 13 00:06:25.169500 kernel: xor: measuring software checksum speed
Sep 13 00:06:25.169555 kernel: 8regs : 16168 MB/sec
Sep 13 00:06:25.170537 kernel: 32regs : 19816 MB/sec
Sep 13 00:06:25.170567 kernel: arm64_neon : 26306 MB/sec
Sep 13 00:06:25.171499 kernel: xor: using function: arm64_neon (26306 MB/sec)
Sep 13 00:06:25.225517 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 00:06:25.238074 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:06:25.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:25.239862 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:06:25.244922 kernel: audit: type=1130 audit(1757721985.238:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:25.238000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:06:25.238000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:06:25.256150 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Sep 13 00:06:25.260618 systemd[1]: Started systemd-udevd.service.
Sep 13 00:06:25.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:25.262433 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:06:25.283363 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Sep 13 00:06:25.311654 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:06:25.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:25.313229 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:06:25.350016 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:06:25.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:25.393910 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:06:25.403638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:06:25.403655 kernel: GPT:9289727 != 19775487
Sep 13 00:06:25.403664 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:06:25.403673 kernel: GPT:9289727 != 19775487
Sep 13 00:06:25.403681 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:06:25.403690 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:25.427496 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (560)
Sep 13 00:06:25.429767 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 00:06:25.430528 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 00:06:25.436242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 00:06:25.439347 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 00:06:25.442555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:06:25.443968 systemd[1]: Starting disk-uuid.service...
Sep 13 00:06:25.451124 disk-uuid[567]: Primary Header is updated.
Sep 13 00:06:25.451124 disk-uuid[567]: Secondary Entries is updated.
Sep 13 00:06:25.451124 disk-uuid[567]: Secondary Header is updated.
Sep 13 00:06:25.454541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:26.463334 disk-uuid[568]: The operation has completed successfully.
Sep 13 00:06:26.464360 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:26.489018 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:06:26.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.489108 systemd[1]: Finished disk-uuid.service.
Sep 13 00:06:26.490587 systemd[1]: Starting verity-setup.service...
Sep 13 00:06:26.505533 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:06:26.524939 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 00:06:26.527012 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 00:06:26.528589 systemd[1]: Finished verity-setup.service.
Sep 13 00:06:26.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.572502 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 00:06:26.572739 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 00:06:26.573391 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 00:06:26.574077 systemd[1]: Starting ignition-setup.service...
Sep 13 00:06:26.576257 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 00:06:26.585535 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:06:26.585569 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:26.585579 kernel: BTRFS info (device vda6): has skinny extents
Sep 13 00:06:26.595063 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:06:26.605237 systemd[1]: Finished ignition-setup.service.
Sep 13 00:06:26.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.606642 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 00:06:26.662041 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 00:06:26.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.662000 audit: BPF prog-id=9 op=LOAD
Sep 13 00:06:26.663970 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:06:26.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.683244 systemd-networkd[745]: lo: Link UP
Sep 13 00:06:26.683254 systemd-networkd[745]: lo: Gained carrier
Sep 13 00:06:26.683739 systemd-networkd[745]: Enumeration completed
Sep 13 00:06:26.683930 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:06:26.684021 systemd[1]: Started systemd-networkd.service.
Sep 13 00:06:26.684797 systemd-networkd[745]: eth0: Link UP
Sep 13 00:06:26.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.684800 systemd-networkd[745]: eth0: Gained carrier
Sep 13 00:06:26.684969 systemd[1]: Reached target network.target.
Sep 13 00:06:26.686269 systemd[1]: Starting iscsiuio.service...
Sep 13 00:06:26.693374 systemd[1]: Started iscsiuio.service.
Sep 13 00:06:26.696546 systemd[1]: Starting iscsid.service...
Sep 13 00:06:26.699936 iscsid[750]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:06:26.699936 iscsid[750]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 13 00:06:26.699936 iscsid[750]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 00:06:26.699936 iscsid[750]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 00:06:26.699936 iscsid[750]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:06:26.699936 iscsid[750]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 00:06:26.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.700238 ignition[666]: Ignition 2.14.0
Sep 13 00:06:26.702795 systemd[1]: Started iscsid.service.
Sep 13 00:06:26.700245 ignition[666]: Stage: fetch-offline
Sep 13 00:06:26.706364 systemd[1]: Starting dracut-initqueue.service...
Sep 13 00:06:26.700284 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:26.707749 systemd-networkd[745]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:06:26.700293 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:26.700513 ignition[666]: parsed url from cmdline: ""
Sep 13 00:06:26.700517 ignition[666]: no config URL provided
Sep 13 00:06:26.700522 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:06:26.700532 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:06:26.700551 ignition[666]: op(1): [started] loading QEMU firmware config module
Sep 13 00:06:26.700555 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:06:26.712502 ignition[666]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:06:26.719694 systemd[1]: Finished dracut-initqueue.service.
Sep 13 00:06:26.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.720483 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 00:06:26.721698 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:06:26.723024 systemd[1]: Reached target remote-fs.target.
Sep 13 00:06:26.724945 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 00:06:26.732256 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 00:06:26.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.761630 ignition[666]: parsing config with SHA512: b64848d79457e78733d6f33f7024ebf31ffd7b864b11515bb77c45d2adae456d9b08e32f0097c3d985fee8b2ca6d5790420ddfd50ca2bde4d2797cab0fdede26
Sep 13 00:06:26.769365 unknown[666]: fetched base config from "system"
Sep 13 00:06:26.769389 unknown[666]: fetched user config from "qemu"
Sep 13 00:06:26.770810 ignition[666]: fetch-offline: fetch-offline passed
Sep 13 00:06:26.771935 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 00:06:26.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.770884 ignition[666]: Ignition finished successfully
Sep 13 00:06:26.773221 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:06:26.773992 systemd[1]: Starting ignition-kargs.service...
Sep 13 00:06:26.782382 ignition[766]: Ignition 2.14.0
Sep 13 00:06:26.782392 ignition[766]: Stage: kargs
Sep 13 00:06:26.782507 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:26.784669 systemd[1]: Finished ignition-kargs.service.
Sep 13 00:06:26.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.782517 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:26.783409 ignition[766]: kargs: kargs passed
Sep 13 00:06:26.786700 systemd[1]: Starting ignition-disks.service...
Sep 13 00:06:26.783447 ignition[766]: Ignition finished successfully
Sep 13 00:06:26.792896 ignition[772]: Ignition 2.14.0
Sep 13 00:06:26.792905 ignition[772]: Stage: disks
Sep 13 00:06:26.793002 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:26.794856 systemd[1]: Finished ignition-disks.service.
Sep 13 00:06:26.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.793012 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:26.796185 systemd[1]: Reached target initrd-root-device.target.
Sep 13 00:06:26.793814 ignition[772]: disks: disks passed
Sep 13 00:06:26.797294 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:06:26.793865 ignition[772]: Ignition finished successfully
Sep 13 00:06:26.798804 systemd[1]: Reached target local-fs.target.
Sep 13 00:06:26.799999 systemd[1]: Reached target sysinit.target.
Sep 13 00:06:26.801009 systemd[1]: Reached target basic.target.
Sep 13 00:06:26.802921 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 00:06:26.813280 systemd-fsck[780]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Sep 13 00:06:26.817630 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 00:06:26.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.819062 systemd[1]: Mounting sysroot.mount...
Sep 13 00:06:26.824479 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:06:26.824667 systemd[1]: Mounted sysroot.mount.
Sep 13 00:06:26.825231 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:06:26.827675 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:06:26.828380 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 00:06:26.828415 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:06:26.828438 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:06:26.830007 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:06:26.831562 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:06:26.835575 initrd-setup-root[790]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:06:26.838902 initrd-setup-root[798]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:06:26.842455 initrd-setup-root[806]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:06:26.845185 initrd-setup-root[814]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:06:26.869195 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:06:26.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.870530 systemd[1]: Starting ignition-mount.service...
Sep 13 00:06:26.871638 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:06:26.875677 bash[831]: umount: /sysroot/usr/share/oem: not mounted.
Sep 13 00:06:26.883415 ignition[832]: INFO : Ignition 2.14.0
Sep 13 00:06:26.883415 ignition[832]: INFO : Stage: mount
Sep 13 00:06:26.885374 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:26.885374 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:26.885374 ignition[832]: INFO : mount: mount passed
Sep 13 00:06:26.885374 ignition[832]: INFO : Ignition finished successfully
Sep 13 00:06:26.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:26.885527 systemd[1]: Finished ignition-mount.service.
Sep 13 00:06:26.891139 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:06:26.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:27.535647 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:06:27.541501 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (842)
Sep 13 00:06:27.542744 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:06:27.542762 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:27.542771 kernel: BTRFS info (device vda6): has skinny extents
Sep 13 00:06:27.545911 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:06:27.547286 systemd[1]: Starting ignition-files.service...
Sep 13 00:06:27.561025 ignition[862]: INFO : Ignition 2.14.0
Sep 13 00:06:27.561025 ignition[862]: INFO : Stage: files
Sep 13 00:06:27.562228 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:27.562228 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:27.562228 ignition[862]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:06:27.565056 ignition[862]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:06:27.565056 ignition[862]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:06:27.567273 ignition[862]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:06:27.567273 ignition[862]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:06:27.567273 ignition[862]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:06:27.567273 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 00:06:27.567273 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 13 00:06:27.565687 unknown[862]: wrote ssh authorized keys file for user: core
Sep 13 00:06:27.635112 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:06:28.125091 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 00:06:28.126689 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:06:28.126689 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 13 00:06:28.260635 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:06:28.373777 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:06:28.375254 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 13 00:06:28.629657 systemd-networkd[745]: eth0: Gained IPv6LL
Sep 13 00:06:28.653634 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:06:29.182479 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:06:29.184296 ignition[862]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:06:29.211861 ignition[862]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:06:29.213166 ignition[862]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:06:29.213166 ignition[862]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:06:29.213166 ignition[862]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:06:29.213166 ignition[862]: INFO : files: files passed
Sep 13 00:06:29.213166 ignition[862]: INFO : Ignition finished successfully
Sep 13 00:06:29.223205 kernel: kauditd_printk_skb: 23 callbacks suppressed
Sep 13 00:06:29.223226 kernel: audit: type=1130 audit(1757721989.215:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.213279 systemd[1]: Finished ignition-files.service.
Sep 13 00:06:29.216293 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:06:29.225272 initrd-setup-root-after-ignition[887]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 13 00:06:29.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.220059 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:06:29.231717 kernel: audit: type=1130 audit(1757721989.224:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.231733 kernel: audit: type=1131 audit(1757721989.224:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.231793 initrd-setup-root-after-ignition[889]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:06:29.235512 kernel: audit: type=1130 audit(1757721989.231:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.220848 systemd[1]: Starting ignition-quench.service...
Sep 13 00:06:29.223611 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:06:29.223687 systemd[1]: Finished ignition-quench.service.
Sep 13 00:06:29.228632 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:06:29.232461 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:06:29.236770 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:06:29.250735 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:06:29.250840 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:06:29.254496 kernel: audit: type=1130 audit(1757721989.251:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.254513 kernel: audit: type=1131 audit(1757721989.251:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.252213 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:06:29.256935 systemd[1]: Reached target initrd.target.
Sep 13 00:06:29.257974 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:06:29.258756 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:06:29.269074 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:06:29.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.270593 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:06:29.273322 kernel: audit: type=1130 audit(1757721989.269:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.278618 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:06:29.279349 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:06:29.280449 systemd[1]: Stopped target timers.target.
Sep 13 00:06:29.281537 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:06:29.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.281642 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:06:29.285980 kernel: audit: type=1131 audit(1757721989.282:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.282646 systemd[1]: Stopped target initrd.target.
Sep 13 00:06:29.285595 systemd[1]: Stopped target basic.target.
Sep 13 00:06:29.286538 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:06:29.287590 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:06:29.288578 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:06:29.289696 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:06:29.290739 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:06:29.291852 systemd[1]: Stopped target sysinit.target.
Sep 13 00:06:29.292841 systemd[1]: Stopped target local-fs.target.
Sep 13 00:06:29.293860 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:06:29.294874 systemd[1]: Stopped target swap.target.
Sep 13 00:06:29.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.295811 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:06:29.300411 kernel: audit: type=1131 audit(1757721989.296:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.295916 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:06:29.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.296971 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:06:29.304600 kernel: audit: type=1131 audit(1757721989.300:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.299766 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:06:29.299874 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:06:29.301163 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:06:29.301259 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:06:29.304234 systemd[1]: Stopped target paths.target.
Sep 13 00:06:29.305254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:06:29.306531 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:06:29.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.308044 systemd[1]: Stopped target slices.target.
Sep 13 00:06:29.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.309156 systemd[1]: Stopped target sockets.target.
Sep 13 00:06:29.310103 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:06:29.314888 iscsid[750]: iscsid shutting down.
Sep 13 00:06:29.310215 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:06:29.311458 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:06:29.311558 systemd[1]: Stopped ignition-files.service.
Sep 13 00:06:29.313392 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:06:29.316369 systemd[1]: Stopping iscsid.service...
Sep 13 00:06:29.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.320580 ignition[902]: INFO : Ignition 2.14.0
Sep 13 00:06:29.320580 ignition[902]: INFO : Stage: umount
Sep 13 00:06:29.320580 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:29.320580 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:06:29.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.317343 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:06:29.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.326568 ignition[902]: INFO : umount: umount passed
Sep 13 00:06:29.326568 ignition[902]: INFO : Ignition finished successfully
Sep 13 00:06:29.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.317455 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:06:29.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.319301 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:06:29.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.320614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:06:29.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.320755 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:06:29.322150 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:06:29.322248 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:06:29.324672 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:06:29.324765 systemd[1]: Stopped iscsid.service.
Sep 13 00:06:29.326115 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:06:29.326191 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:06:29.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.327404 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:06:29.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.327496 systemd[1]: Closed iscsid.socket.
Sep 13 00:06:29.328293 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:06:29.328330 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:06:29.329860 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:06:29.329903 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:06:29.331269 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:06:29.331306 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:06:29.332517 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:06:29.337417 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:06:29.337900 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:06:29.337980 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:06:29.340957 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:06:29.341036 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:06:29.342646 systemd[1]: Stopped target network.target.
Sep 13 00:06:29.343660 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:06:29.343691 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:06:29.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.345002 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:06:29.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.346156 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:06:29.354520 systemd-networkd[745]: eth0: DHCPv6 lease lost
Sep 13 00:06:29.356188 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:06:29.356285 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:06:29.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.358837 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:06:29.358923 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:06:29.359974 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:06:29.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.360004 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:06:29.368000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:06:29.368000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:06:29.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.361643 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:06:29.362788 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:06:29.362849 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:06:29.364216 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:06:29.364249 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:06:29.367053 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:06:29.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.367095 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:06:29.368951 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:06:29.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.373849 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:06:29.376550 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:06:29.376647 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:06:29.379164 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:06:29.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.379282 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:06:29.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.380519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:06:29.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.380555 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:06:29.381559 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:06:29.381588 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:06:29.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.383282 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:06:29.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.384447 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:06:29.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.386034 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:06:29.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.386081 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:06:29.387137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:06:29.387172 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:06:29.389379 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:06:29.390720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:29.390777 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:06:29.392405 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:06:29.392535 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:06:29.393257 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:06:29.393302 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:06:29.394494 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:06:29.394572 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:06:29.395563 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:06:29.397352 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:06:29.403774 systemd[1]: Switching root.
Sep 13 00:06:29.421707 systemd-journald[290]: Journal stopped
Sep 13 00:06:31.506065 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:06:31.506121 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:06:31.506136 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:06:31.506146 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:06:31.506159 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:06:31.506172 kernel: SELinux: policy capability open_perms=1
Sep 13 00:06:31.506182 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:06:31.506191 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:06:31.506202 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:06:31.506212 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:06:31.506226 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:06:31.506236 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:06:31.506246 systemd[1]: Successfully loaded SELinux policy in 33.015ms.
Sep 13 00:06:31.506259 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.091ms.
Sep 13 00:06:31.506271 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:06:31.506281 systemd[1]: Detected virtualization kvm.
Sep 13 00:06:31.506291 systemd[1]: Detected architecture arm64.
Sep 13 00:06:31.506303 systemd[1]: Detected first boot.
Sep 13 00:06:31.506313 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:31.506325 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:06:31.506334 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:06:31.506345 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:06:31.506358 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:06:31.506369 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:06:31.506380 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:06:31.506391 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 00:06:31.506401 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:06:31.506411 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:06:31.506421 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:06:31.506434 systemd[1]: Created slice system-getty.slice.
Sep 13 00:06:31.506444 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:06:31.506458 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:06:31.506483 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:06:31.506495 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:06:31.506505 systemd[1]: Created slice user.slice.
Sep 13 00:06:31.506515 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:06:31.506525 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:06:31.506536 systemd[1]: Set up automount boot.automount.
Sep 13 00:06:31.506548 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:06:31.506559 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 00:06:31.506573 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:06:31.506584 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:06:31.506594 systemd[1]: Reached target integritysetup.target.
Sep 13 00:06:31.506604 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:06:31.506615 systemd[1]: Reached target remote-fs.target.
Sep 13 00:06:31.506625 systemd[1]: Reached target slices.target.
Sep 13 00:06:31.506636 systemd[1]: Reached target swap.target.
Sep 13 00:06:31.506646 systemd[1]: Reached target torcx.target.
Sep 13 00:06:31.506657 systemd[1]: Reached target veritysetup.target.
Sep 13 00:06:31.506668 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:06:31.506680 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:06:31.506690 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:06:31.506700 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:06:31.506710 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:06:31.506720 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:06:31.506731 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:06:31.506742 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:06:31.506753 systemd[1]: Mounting media.mount...
Sep 13 00:06:31.506763 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:06:31.506779 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:06:31.506792 systemd[1]: Mounting tmp.mount...
Sep 13 00:06:31.506803 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:06:31.506814 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:06:31.506824 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:06:31.506834 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:06:31.506846 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:06:31.506856 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:06:31.506867 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:06:31.506877 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:06:31.506887 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:06:31.506898 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:06:31.506911 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:06:31.506922 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 00:06:31.506933 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:06:31.506944 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:06:31.506954 kernel: loop: module loaded
Sep 13 00:06:31.506965 systemd[1]: Stopped systemd-journald.service.
Sep 13 00:06:31.506976 kernel: fuse: init (API version 7.34)
Sep 13 00:06:31.506986 systemd[1]: Starting systemd-journald.service...
Sep 13 00:06:31.506997 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:06:31.507008 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:06:31.507018 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:06:31.507028 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:06:31.507039 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:06:31.507049 systemd[1]: Stopped verity-setup.service.
Sep 13 00:06:31.507060 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:06:31.507071 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:06:31.507081 systemd[1]: Mounted media.mount.
Sep 13 00:06:31.507093 systemd-journald[1006]: Journal started
Sep 13 00:06:31.507132 systemd-journald[1006]: Runtime Journal (/run/log/journal/819d0cafb87640a38a14d6d066fad056) is 6.0M, max 48.7M, 42.6M free.
Sep 13 00:06:29.477000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:06:29.629000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:06:29.629000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:06:29.629000 audit: BPF prog-id=10 op=LOAD
Sep 13 00:06:29.629000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 00:06:29.629000 audit: BPF prog-id=11 op=LOAD
Sep 13 00:06:29.629000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 00:06:29.679000 audit[936]: AVC avc: denied { associate } for pid=936 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:06:29.679000 audit[936]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b4 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=919 pid=936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:29.679000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:06:29.680000 audit[936]: AVC avc: denied { associate } for pid=936 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:06:29.680000 audit[936]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=919 pid=936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:29.680000 audit: CWD cwd="/"
Sep 13 00:06:29.680000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:06:29.680000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:06:29.680000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:06:31.403000 audit: BPF prog-id=12 op=LOAD
Sep 13 00:06:31.403000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:06:31.403000 audit: BPF prog-id=13 op=LOAD
Sep 13 00:06:31.403000 audit: BPF prog-id=14 op=LOAD
Sep 13 00:06:31.403000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:06:31.403000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:06:31.404000 audit: BPF prog-id=15 op=LOAD
Sep 13 00:06:31.404000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 00:06:31.404000 audit: BPF prog-id=16 op=LOAD
Sep 13 00:06:31.507687 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:06:31.404000 audit: BPF prog-id=17 op=LOAD
Sep 13 00:06:31.404000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 00:06:31.404000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 00:06:31.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.418000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 00:06:31.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.488000 audit: BPF prog-id=18 op=LOAD
Sep 13 00:06:31.488000 audit: BPF prog-id=19 op=LOAD
Sep 13 00:06:31.488000 audit: BPF prog-id=20 op=LOAD
Sep 13 00:06:31.488000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 00:06:31.488000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 00:06:31.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.504000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:06:31.504000 audit[1006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc09a6e00 a2=4000 a3=1 items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:31.504000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:06:31.402799 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:06:29.677594 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:06:31.402812 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 13 00:06:29.677881 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:06:31.405916 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:06:29.677906 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:06:29.677938 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 00:06:29.677948 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 00:06:29.677979 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 00:06:29.677992 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 00:06:31.508870 systemd[1]: Started systemd-journald.service.
Sep 13 00:06:29.678238 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 00:06:29.678279 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:06:29.678291 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:06:29.679195 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 00:06:29.679238 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 00:06:29.679257 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 00:06:29.679271 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 00:06:29.679289 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 00:06:29.679303 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 00:06:31.157577 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:06:31.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.157850 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:06:31.157953 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:06:31.158123 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:06:31.158173 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 00:06:31.158232 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-09-13T00:06:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 00:06:31.509976 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:06:31.510705 systemd[1]: Mounted tmp.mount.
Sep 13 00:06:31.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.511839 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:06:31.512696 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:06:31.512851 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:06:31.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.513721 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:31.513884 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:06:31.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.514722 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:06:31.514881 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:06:31.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.515755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:31.515891 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:06:31.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.516913 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:06:31.517060 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:06:31.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.517970 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:31.518109 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:06:31.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.520906 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:06:31.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.521972 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:06:31.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.522994 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:06:31.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.524021 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:06:31.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.525188 systemd[1]: Reached target network-pre.target.
Sep 13 00:06:31.527068 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:06:31.528941 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:06:31.529547 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:06:31.531096 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:06:31.532955 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:06:31.533723 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:06:31.534748 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:06:31.535402 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:06:31.536561 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:06:31.538356 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:06:31.540298 systemd-journald[1006]: Time spent on flushing to /var/log/journal/819d0cafb87640a38a14d6d066fad056 is 13.253ms for 996 entries.
Sep 13 00:06:31.540298 systemd-journald[1006]: System Journal (/var/log/journal/819d0cafb87640a38a14d6d066fad056) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:06:31.558645 systemd-journald[1006]: Received client request to flush runtime journal.
Sep 13 00:06:31.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.541544 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:06:31.543350 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:06:31.545770 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:06:31.548590 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:06:31.551945 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:06:31.554126 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:06:31.558592 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:06:31.559587 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:06:31.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.562092 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:06:31.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.563123 udevadm[1037]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:06:31.908802 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:06:31.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.909000 audit: BPF prog-id=21 op=LOAD
Sep 13 00:06:31.909000 audit: BPF prog-id=22 op=LOAD
Sep 13 00:06:31.909000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:06:31.909000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:06:31.910819 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:06:31.925968 systemd-udevd[1039]: Using default interface naming scheme 'v252'.
Sep 13 00:06:31.938820 systemd[1]: Started systemd-udevd.service.
Sep 13 00:06:31.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.939000 audit: BPF prog-id=23 op=LOAD
Sep 13 00:06:31.942836 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:06:31.947000 audit: BPF prog-id=24 op=LOAD
Sep 13 00:06:31.947000 audit: BPF prog-id=25 op=LOAD
Sep 13 00:06:31.947000 audit: BPF prog-id=26 op=LOAD
Sep 13 00:06:31.948879 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:06:31.959780 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Sep 13 00:06:31.975994 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:06:31.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:31.996191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:06:32.024489 systemd-networkd[1051]: lo: Link UP
Sep 13 00:06:32.024497 systemd-networkd[1051]: lo: Gained carrier
Sep 13 00:06:32.024871 systemd-networkd[1051]: Enumeration completed
Sep 13 00:06:32.024961 systemd[1]: Started systemd-networkd.service.
Sep 13 00:06:32.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.026256 systemd-networkd[1051]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:06:32.027392 systemd-networkd[1051]: eth0: Link UP
Sep 13 00:06:32.027400 systemd-networkd[1051]: eth0: Gained carrier
Sep 13 00:06:32.044926 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 00:06:32.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.046909 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 00:06:32.047945 systemd-networkd[1051]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:06:32.054663 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:06:32.077283 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 00:06:32.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.078152 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:06:32.079920 systemd[1]: Starting lvm2-activation.service...
Sep 13 00:06:32.083118 lvm[1073]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:06:32.107367 systemd[1]: Finished lvm2-activation.service.
Sep 13 00:06:32.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.108248 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:06:32.109001 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:06:32.109025 systemd[1]: Reached target local-fs.target.
Sep 13 00:06:32.109662 systemd[1]: Reached target machines.target.
Sep 13 00:06:32.111369 systemd[1]: Starting ldconfig.service...
Sep 13 00:06:32.112345 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.112403 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:06:32.113566 systemd[1]: Starting systemd-boot-update.service...
Sep 13 00:06:32.115344 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 00:06:32.118815 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 00:06:32.123426 systemd[1]: Starting systemd-sysext.service...
Sep 13 00:06:32.125736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 00:06:32.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.127805 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1075 (bootctl)
Sep 13 00:06:32.130299 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 00:06:32.136884 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 00:06:32.140846 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 00:06:32.141038 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 00:06:32.156502 kernel: loop0: detected capacity change from 0 to 211168
Sep 13 00:06:32.198880 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:06:32.199486 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 00:06:32.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.207494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:06:32.224064 systemd-fsck[1084]: fsck.fat 4.2 (2021-01-31)
Sep 13 00:06:32.224064 systemd-fsck[1084]: /dev/vda1: 236 files, 117310/258078 clusters
Sep 13 00:06:32.227439 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 00:06:32.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.230630 kernel: loop1: detected capacity change from 0 to 211168
Sep 13 00:06:32.230902 systemd[1]: Mounting boot.mount...
Sep 13 00:06:32.236137 (sd-sysext)[1087]: Using extensions 'kubernetes'.
Sep 13 00:06:32.236686 (sd-sysext)[1087]: Merged extensions into '/usr'.
Sep 13 00:06:32.245646 systemd[1]: Mounted boot.mount.
Sep 13 00:06:32.254400 systemd[1]: Finished systemd-boot-update.service.
Sep 13 00:06:32.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.255862 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.257167 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:06:32.259081 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:06:32.260929 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:06:32.261819 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.261948 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:06:32.262702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:32.262832 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:06:32.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.264059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:32.264173 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:06:32.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.265627 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:32.265733 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:06:32.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.266909 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:06:32.267003 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.301058 ldconfig[1074]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:06:32.304428 systemd[1]: Finished ldconfig.service.
Sep 13 00:06:32.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.504618 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 00:06:32.509436 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:06:32.511219 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:06:32.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.513187 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:06:32.514732 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:06:32.518783 systemd[1]: Reloading.
Sep 13 00:06:32.526333 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:06:32.528623 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:06:32.531532 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:06:32.566492 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-13T00:06:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:06:32.566518 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-13T00:06:32Z" level=info msg="torcx already run"
Sep 13 00:06:32.630375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:06:32.630394 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:06:32.647931 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:06:32.694000 audit: BPF prog-id=27 op=LOAD
Sep 13 00:06:32.694000 audit: BPF prog-id=28 op=LOAD
Sep 13 00:06:32.694000 audit: BPF prog-id=21 op=UNLOAD
Sep 13 00:06:32.694000 audit: BPF prog-id=22 op=UNLOAD
Sep 13 00:06:32.696000 audit: BPF prog-id=29 op=LOAD
Sep 13 00:06:32.696000 audit: BPF prog-id=24 op=UNLOAD
Sep 13 00:06:32.696000 audit: BPF prog-id=30 op=LOAD
Sep 13 00:06:32.696000 audit: BPF prog-id=31 op=LOAD
Sep 13 00:06:32.696000 audit: BPF prog-id=25 op=UNLOAD
Sep 13 00:06:32.696000 audit: BPF prog-id=26 op=UNLOAD
Sep 13 00:06:32.697000 audit: BPF prog-id=32 op=LOAD
Sep 13 00:06:32.697000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 00:06:32.697000 audit: BPF prog-id=33 op=LOAD
Sep 13 00:06:32.697000 audit: BPF prog-id=34 op=LOAD
Sep 13 00:06:32.697000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 00:06:32.697000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 00:06:32.699000 audit: BPF prog-id=35 op=LOAD
Sep 13 00:06:32.699000 audit: BPF prog-id=23 op=UNLOAD
Sep 13 00:06:32.701332 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:06:32.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.706345 systemd[1]: Starting audit-rules.service...
Sep 13 00:06:32.708421 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:06:32.710741 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:06:32.712000 audit: BPF prog-id=36 op=LOAD
Sep 13 00:06:32.714000 audit: BPF prog-id=37 op=LOAD
Sep 13 00:06:32.713544 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:06:32.715945 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:06:32.717920 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:06:32.719273 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:06:32.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.722000 audit[1164]: SYSTEM_BOOT pid=1164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.725541 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.727152 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:06:32.729130 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:06:32.731437 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:06:32.732216 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.732418 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:06:32.732590 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:06:32.733884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:32.734058 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:06:32.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.735388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:32.735519 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:06:32.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.736824 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:32.736938 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:06:32.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.738086 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:06:32.738227 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.739853 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:06:32.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:32.741704 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.743204 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:06:32.744000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:06:32.744000 audit[1176]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe9550170 a2=420 a3=0 items=0 ppid=1153 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:32.744000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:06:32.744989 augenrules[1176]: No rules
Sep 13 00:06:32.745052 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:06:32.747642 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:06:32.748305 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.748434 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:06:32.748631 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:06:32.749489 systemd[1]: Finished audit-rules.service.
Sep 13 00:06:32.750654 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:06:32.751824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:32.751954 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:06:32.753020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:32.753144 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:06:32.754284 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:32.754402 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:06:32.757974 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.759367 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:06:32.761340 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:06:32.763295 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:06:32.765350 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:06:32.766140 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.766266 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:06:32.767616 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:06:32.769958 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:06:32.770855 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:06:32.772080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:32.772240 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:06:32.773404 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:06:32.773561 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:06:32.774634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:32.774773 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:06:32.775893 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:32.776007 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:06:32.777273 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:06:32.778852 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:06:32.778938 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:06:32.780098 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:06:32.783978 systemd-resolved[1162]: Positive Trust Anchors:
Sep 13 00:06:32.783990 systemd-resolved[1162]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:32.784018 systemd-resolved[1162]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:06:32.785138 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:06:33.198882 systemd-timesyncd[1163]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:06:33.199212 systemd-timesyncd[1163]: Initial clock synchronization to Sat 2025-09-13 00:06:33.198746 UTC.
Sep 13 00:06:33.199218 systemd[1]: Reached target time-set.target.
Sep 13 00:06:33.205625 systemd-resolved[1162]: Defaulting to hostname 'linux'.
Sep 13 00:06:33.207203 systemd[1]: Started systemd-resolved.service.
Sep 13 00:06:33.207900 systemd[1]: Reached target network.target.
Sep 13 00:06:33.208540 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:06:33.209132 systemd[1]: Reached target sysinit.target.
Sep 13 00:06:33.209783 systemd[1]: Started motdgen.path.
Sep 13 00:06:33.210425 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:06:33.211486 systemd[1]: Started logrotate.timer.
Sep 13 00:06:33.212148 systemd[1]: Started mdadm.timer.
Sep 13 00:06:33.212657 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:06:33.213294 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:06:33.213319 systemd[1]: Reached target paths.target.
Sep 13 00:06:33.213851 systemd[1]: Reached target timers.target.
Sep 13 00:06:33.214785 systemd[1]: Listening on dbus.socket.
Sep 13 00:06:33.216492 systemd[1]: Starting docker.socket...
Sep 13 00:06:33.219458 systemd[1]: Listening on sshd.socket.
Sep 13 00:06:33.220141 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:06:33.220601 systemd[1]: Listening on docker.socket.
Sep 13 00:06:33.221299 systemd[1]: Reached target sockets.target.
Sep 13 00:06:33.221878 systemd[1]: Reached target basic.target.
Sep 13 00:06:33.222558 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:06:33.222592 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:06:33.223593 systemd[1]: Starting containerd.service...
Sep 13 00:06:33.225422 systemd[1]: Starting dbus.service...
Sep 13 00:06:33.227359 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:06:33.229468 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:06:33.230504 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:06:33.231701 systemd[1]: Starting motdgen.service...
Sep 13 00:06:33.232334 jq[1196]: false
Sep 13 00:06:33.233593 systemd[1]: Starting prepare-helm.service...
Sep 13 00:06:33.235772 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:06:33.237938 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:06:33.240819 systemd[1]: Starting systemd-logind.service...
Sep 13 00:06:33.241556 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:06:33.241649 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:06:33.242073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:06:33.242865 systemd[1]: Starting update-engine.service...
Sep 13 00:06:33.244672 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:06:33.248512 jq[1209]: true
Sep 13 00:06:33.247345 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:06:33.247549 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:06:33.248636 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:06:33.248812 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:06:33.249463 extend-filesystems[1197]: Found loop1
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda1
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda2
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda3
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found usr
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda4
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda6
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda7
Sep 13 00:06:33.250351 extend-filesystems[1197]: Found vda9
Sep 13 00:06:33.250351 extend-filesystems[1197]: Checking size of /dev/vda9
Sep 13 00:06:33.261900 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:06:33.262194 systemd[1]: Finished motdgen.service.
Sep 13 00:06:33.264687 jq[1216]: true
Sep 13 00:06:33.278385 dbus-daemon[1195]: [system] SELinux support is enabled
Sep 13 00:06:33.278609 systemd[1]: Started dbus.service.
Sep 13 00:06:33.291313 tar[1213]: linux-arm64/LICENSE
Sep 13 00:06:33.291313 tar[1213]: linux-arm64/helm
Sep 13 00:06:33.281854 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:06:33.281881 systemd[1]: Reached target system-config.target.
Sep 13 00:06:33.282858 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:06:33.282878 systemd[1]: Reached target user-config.target.
Sep 13 00:06:33.296372 extend-filesystems[1197]: Resized partition /dev/vda9
Sep 13 00:06:33.304154 systemd-logind[1207]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 13 00:06:33.304958 extend-filesystems[1245]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:06:33.305006 systemd-logind[1207]: New seat seat0.
Sep 13 00:06:33.313129 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:06:33.316597 systemd[1]: Started systemd-logind.service.
Sep 13 00:06:33.321184 bash[1246]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:06:33.329589 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:06:33.341820 update_engine[1208]: I0913 00:06:33.341471 1208 main.cc:92] Flatcar Update Engine starting
Sep 13 00:06:33.344233 systemd[1]: Started update-engine.service.
Sep 13 00:06:33.347548 systemd[1]: Started locksmithd.service.
Sep 13 00:06:33.349445 update_engine[1208]: I0913 00:06:33.349323 1208 update_check_scheduler.cc:74] Next update check in 11m56s
Sep 13 00:06:33.350167 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:06:33.361741 extend-filesystems[1245]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:06:33.361741 extend-filesystems[1245]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:06:33.361741 extend-filesystems[1245]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:06:33.367767 extend-filesystems[1197]: Resized filesystem in /dev/vda9
Sep 13 00:06:33.363984 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:06:33.364168 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:06:33.368966 env[1219]: time="2025-09-13T00:06:33.368917061Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:06:33.398580 env[1219]: time="2025-09-13T00:06:33.398523861Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:06:33.398717 env[1219]: time="2025-09-13T00:06:33.398693701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:33.405288 env[1219]: time="2025-09-13T00:06:33.405246221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:33.405288 env[1219]: time="2025-09-13T00:06:33.405283141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:33.405560 env[1219]: time="2025-09-13T00:06:33.405533701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:33.408198 env[1219]: time="2025-09-13T00:06:33.408170101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:33.408294 env[1219]: time="2025-09-13T00:06:33.408204701Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:06:33.408294 env[1219]: time="2025-09-13T00:06:33.408217901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:33.408342 env[1219]: time="2025-09-13T00:06:33.408325021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:33.408603 env[1219]: time="2025-09-13T00:06:33.408576461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:33.408923 locksmithd[1249]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:06:33.409339 env[1219]: time="2025-09-13T00:06:33.409302821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:33.409375 env[1219]: time="2025-09-13T00:06:33.409338181Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:06:33.409450 env[1219]: time="2025-09-13T00:06:33.409428901Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:06:33.409524 env[1219]: time="2025-09-13T00:06:33.409448261Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:06:33.413089 env[1219]: time="2025-09-13T00:06:33.413049261Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:06:33.413089 env[1219]: time="2025-09-13T00:06:33.413090901Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:06:33.413234 env[1219]: time="2025-09-13T00:06:33.413135061Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:06:33.413234 env[1219]: time="2025-09-13T00:06:33.413174861Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413234 env[1219]: time="2025-09-13T00:06:33.413189941Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413234 env[1219]: time="2025-09-13T00:06:33.413205061Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413234 env[1219]: time="2025-09-13T00:06:33.413221141Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413635 env[1219]: time="2025-09-13T00:06:33.413607821Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413698 env[1219]: time="2025-09-13T00:06:33.413637541Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413698 env[1219]: time="2025-09-13T00:06:33.413653061Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413698 env[1219]: time="2025-09-13T00:06:33.413666661Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.413698 env[1219]: time="2025-09-13T00:06:33.413679461Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:06:33.413853 env[1219]: time="2025-09-13T00:06:33.413816781Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:06:33.413956 env[1219]: time="2025-09-13T00:06:33.413902661Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414218661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414273741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414289981Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414454501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414470781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414483861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414495501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414508061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414522221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414535381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414547261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414561821Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414701901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414718061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415147 env[1219]: time="2025-09-13T00:06:33.414730941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415594 env[1219]: time="2025-09-13T00:06:33.414742821Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:06:33.415594 env[1219]: time="2025-09-13T00:06:33.414757381Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:06:33.415594 env[1219]: time="2025-09-13T00:06:33.414770461Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:06:33.415594 env[1219]: time="2025-09-13T00:06:33.414788021Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:06:33.415594 env[1219]: time="2025-09-13T00:06:33.414823221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:06:33.415697 env[1219]: time="2025-09-13T00:06:33.415015181Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:06:33.415697 env[1219]: time="2025-09-13T00:06:33.415067421Z" level=info msg="Connect containerd service"
Sep 13 00:06:33.415697 env[1219]: time="2025-09-13T00:06:33.415098781Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:06:33.416642 env[1219]: time="2025-09-13T00:06:33.416014621Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:06:33.416642 env[1219]: time="2025-09-13T00:06:33.416226701Z" level=info msg="Start subscribing containerd event"
Sep 13 00:06:33.416642 env[1219]: time="2025-09-13T00:06:33.416283301Z" level=info msg="Start recovering state"
Sep 13 00:06:33.416642 env[1219]: time="2025-09-13T00:06:33.416545701Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:06:33.416642 env[1219]: time="2025-09-13T00:06:33.416599981Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:06:33.416773 systemd[1]: Started containerd.service.
Sep 13 00:06:33.416930 env[1219]: time="2025-09-13T00:06:33.416909261Z" level=info msg="Start event monitor"
Sep 13 00:06:33.417004 env[1219]: time="2025-09-13T00:06:33.416989301Z" level=info msg="Start snapshots syncer"
Sep 13 00:06:33.417060 env[1219]: time="2025-09-13T00:06:33.417047021Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:06:33.417138 env[1219]: time="2025-09-13T00:06:33.417106981Z" level=info msg="Start streaming server"
Sep 13 00:06:33.417750 env[1219]: time="2025-09-13T00:06:33.417711781Z" level=info msg="containerd successfully booted in 0.049634s"
Sep 13 00:06:33.677804 tar[1213]: linux-arm64/README.md
Sep 13 00:06:33.682149 systemd[1]: Finished prepare-helm.service.
Sep 13 00:06:33.778228 systemd-networkd[1051]: eth0: Gained IPv6LL
Sep 13 00:06:33.779909 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:06:33.781071 systemd[1]: Reached target network-online.target.
Sep 13 00:06:33.783460 systemd[1]: Starting kubelet.service...
Sep 13 00:06:34.500185 systemd[1]: Started kubelet.service.
Sep 13 00:06:34.929931 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:06:34.949413 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:06:34.951758 systemd[1]: Starting issuegen.service...
Sep 13 00:06:34.956311 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:06:34.956472 systemd[1]: Finished issuegen.service.
Sep 13 00:06:34.958602 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:06:34.964836 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:06:34.966910 systemd[1]: Started getty@tty1.service.
Sep 13 00:06:34.969020 systemd[1]: Started serial-getty@ttyAMA0.service.
Sep 13 00:06:34.970865 systemd[1]: Reached target getty.target.
Sep 13 00:06:34.971682 systemd[1]: Reached target multi-user.target.
Sep 13 00:06:34.974436 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:06:34.987025 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:06:34.987204 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:06:34.988349 systemd[1]: Startup finished in 560ms (kernel) + 4.891s (initrd) + 5.132s (userspace) = 10.585s.
Sep 13 00:06:34.990012 kubelet[1264]: E0913 00:06:34.989974 1264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:06:34.992864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:06:34.992983 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:06:38.059672 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:06:38.061160 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:49332.service.
Sep 13 00:06:38.126219 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 49332 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:06:38.128299 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:38.139813 systemd-logind[1207]: New session 1 of user core.
Sep 13 00:06:38.140789 systemd[1]: Created slice user-500.slice.
Sep 13 00:06:38.141953 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:06:38.151200 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:06:38.152786 systemd[1]: Starting user@500.service...
Sep 13 00:06:38.157006 (systemd)[1290]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:38.230465 systemd[1290]: Queued start job for default target default.target.
Sep 13 00:06:38.231014 systemd[1290]: Reached target paths.target.
Sep 13 00:06:38.231048 systemd[1290]: Reached target sockets.target.
Sep 13 00:06:38.231059 systemd[1290]: Reached target timers.target.
Sep 13 00:06:38.231068 systemd[1290]: Reached target basic.target.
Sep 13 00:06:38.231105 systemd[1290]: Reached target default.target.
Sep 13 00:06:38.231141 systemd[1290]: Startup finished in 67ms.
Sep 13 00:06:38.231213 systemd[1]: Started user@500.service.
Sep 13 00:06:38.232239 systemd[1]: Started session-1.scope.
Sep 13 00:06:38.284589 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:49340.service.
Sep 13 00:06:38.327833 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 49340 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:06:38.329369 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:38.336372 systemd-logind[1207]: New session 2 of user core.
Sep 13 00:06:38.338082 systemd[1]: Started session-2.scope.
Sep 13 00:06:38.398421 sshd[1299]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:38.401227 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:49340.service: Deactivated successfully.
Sep 13 00:06:38.401845 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:06:38.402379 systemd-logind[1207]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:06:38.403788 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:49342.service.
Sep 13 00:06:38.404528 systemd-logind[1207]: Removed session 2.
Sep 13 00:06:38.444769 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 49342 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:06:38.448708 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:38.456954 systemd-logind[1207]: New session 3 of user core.
Sep 13 00:06:38.457872 systemd[1]: Started session-3.scope.
Sep 13 00:06:38.510225 sshd[1305]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:38.514852 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:49350.service.
Sep 13 00:06:38.516262 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:49342.service: Deactivated successfully.
Sep 13 00:06:38.517097 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:06:38.517693 systemd-logind[1207]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:06:38.518523 systemd-logind[1207]: Removed session 3.
Sep 13 00:06:38.553199 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:06:38.554763 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:38.558829 systemd-logind[1207]: New session 4 of user core.
Sep 13 00:06:38.559866 systemd[1]: Started session-4.scope.
Sep 13 00:06:38.615928 sshd[1311]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:38.620287 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:49354.service.
Sep 13 00:06:38.625809 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:49350.service: Deactivated successfully.
Sep 13 00:06:38.626576 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:06:38.627199 systemd-logind[1207]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:06:38.628482 systemd-logind[1207]: Removed session 4.
Sep 13 00:06:38.661539 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 49354 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:06:38.663086 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:38.669591 systemd-logind[1207]: New session 5 of user core.
Sep 13 00:06:38.670905 systemd[1]: Started session-5.scope.
Sep 13 00:06:38.742278 sudo[1321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:06:38.742522 sudo[1321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:06:38.784410 systemd[1]: Starting docker.service...
Sep 13 00:06:38.849604 env[1333]: time="2025-09-13T00:06:38.849547941Z" level=info msg="Starting up"
Sep 13 00:06:38.851922 env[1333]: time="2025-09-13T00:06:38.850981221Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:06:38.851922 env[1333]: time="2025-09-13T00:06:38.851015141Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:06:38.851922 env[1333]: time="2025-09-13T00:06:38.851038221Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:06:38.851922 env[1333]: time="2025-09-13T00:06:38.851049381Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:06:38.853653 env[1333]: time="2025-09-13T00:06:38.853625581Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:06:38.853738 env[1333]: time="2025-09-13T00:06:38.853725981Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:06:38.853796 env[1333]: time="2025-09-13T00:06:38.853782581Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:06:38.853844 env[1333]: time="2025-09-13T00:06:38.853832221Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:06:39.009331 env[1333]: time="2025-09-13T00:06:39.008661021Z" level=info msg="Loading containers: start."
Sep 13 00:06:39.136153 kernel: Initializing XFRM netlink socket
Sep 13 00:06:39.160986 env[1333]: time="2025-09-13T00:06:39.160945861Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:06:39.228380 systemd-networkd[1051]: docker0: Link UP
Sep 13 00:06:39.246362 env[1333]: time="2025-09-13T00:06:39.246308501Z" level=info msg="Loading containers: done."
Sep 13 00:06:39.266344 env[1333]: time="2025-09-13T00:06:39.266048341Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:06:39.266344 env[1333]: time="2025-09-13T00:06:39.266275901Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:06:39.266556 env[1333]: time="2025-09-13T00:06:39.266406621Z" level=info msg="Daemon has completed initialization"
Sep 13 00:06:39.287541 systemd[1]: Started docker.service.
Sep 13 00:06:39.292747 env[1333]: time="2025-09-13T00:06:39.292607901Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:06:39.956162 env[1219]: time="2025-09-13T00:06:39.956087141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 00:06:40.565661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382087950.mount: Deactivated successfully.
Sep 13 00:06:41.984056 env[1219]: time="2025-09-13T00:06:41.984010541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:41.985465 env[1219]: time="2025-09-13T00:06:41.985434741Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:41.987340 env[1219]: time="2025-09-13T00:06:41.987317221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:41.989158 env[1219]: time="2025-09-13T00:06:41.989136061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:41.990171 env[1219]: time="2025-09-13T00:06:41.990041061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Sep 13 00:06:41.991736 env[1219]: time="2025-09-13T00:06:41.991711581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 00:06:43.363865 env[1219]: time="2025-09-13T00:06:43.363789501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:43.365251 env[1219]: time="2025-09-13T00:06:43.365221501Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:43.367040 env[1219]: time="2025-09-13T00:06:43.367012501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:43.369454 env[1219]: time="2025-09-13T00:06:43.369421501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:43.370236 env[1219]: time="2025-09-13T00:06:43.370211101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Sep 13 00:06:43.370715 env[1219]: time="2025-09-13T00:06:43.370693421Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 00:06:44.623195 env[1219]: time="2025-09-13T00:06:44.623148181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:44.624697 env[1219]: time="2025-09-13T00:06:44.624666981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:44.626878 env[1219]: time="2025-09-13T00:06:44.626843821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:44.628845 env[1219]: time="2025-09-13T00:06:44.628814701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:44.629713 env[1219]: time="2025-09-13T00:06:44.629682341Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Sep 13 00:06:44.630811 env[1219]: time="2025-09-13T00:06:44.630785461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 00:06:45.243828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:06:45.243997 systemd[1]: Stopped kubelet.service.
Sep 13 00:06:45.245396 systemd[1]: Starting kubelet.service...
Sep 13 00:06:45.339999 systemd[1]: Started kubelet.service.
Sep 13 00:06:45.379854 kubelet[1469]: E0913 00:06:45.379809 1469 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:45.382869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:45.383007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:06:46.150085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731529124.mount: Deactivated successfully. Sep 13 00:06:46.744410 env[1219]: time="2025-09-13T00:06:46.744332221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:46.747019 env[1219]: time="2025-09-13T00:06:46.746979261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:46.749382 env[1219]: time="2025-09-13T00:06:46.749337741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:46.751141 env[1219]: time="2025-09-13T00:06:46.751103021Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:46.751594 env[1219]: time="2025-09-13T00:06:46.751570221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference 
\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 13 00:06:46.752276 env[1219]: time="2025-09-13T00:06:46.752215421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:06:47.353867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026199106.mount: Deactivated successfully. Sep 13 00:06:48.506503 env[1219]: time="2025-09-13T00:06:48.506441421Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:48.508794 env[1219]: time="2025-09-13T00:06:48.508761381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:48.511398 env[1219]: time="2025-09-13T00:06:48.511357021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:48.514092 env[1219]: time="2025-09-13T00:06:48.514047861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:48.514454 env[1219]: time="2025-09-13T00:06:48.514426501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 13 00:06:48.514998 env[1219]: time="2025-09-13T00:06:48.514885381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:06:48.984017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229828552.mount: Deactivated successfully. 
Sep 13 00:06:48.993237 env[1219]: time="2025-09-13T00:06:48.993168621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:48.995921 env[1219]: time="2025-09-13T00:06:48.995883181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:48.998679 env[1219]: time="2025-09-13T00:06:48.998646501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:49.001479 env[1219]: time="2025-09-13T00:06:49.001449341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:49.003543 env[1219]: time="2025-09-13T00:06:49.003517501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:06:49.004570 env[1219]: time="2025-09-13T00:06:49.004095341Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:06:49.615993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781019088.mount: Deactivated successfully. 
Sep 13 00:06:52.111823 env[1219]: time="2025-09-13T00:06:52.111775581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:52.113691 env[1219]: time="2025-09-13T00:06:52.113655341Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:52.116083 env[1219]: time="2025-09-13T00:06:52.116055141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:52.118196 env[1219]: time="2025-09-13T00:06:52.118157861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:52.118790 env[1219]: time="2025-09-13T00:06:52.118748821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 13 00:06:55.633821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:06:55.633992 systemd[1]: Stopped kubelet.service. Sep 13 00:06:55.635392 systemd[1]: Starting kubelet.service... Sep 13 00:06:55.733304 systemd[1]: Started kubelet.service. 
Sep 13 00:06:55.775608 kubelet[1503]: E0913 00:06:55.775557 1503 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:55.779650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:55.779780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:06:57.615671 systemd[1]: Stopped kubelet.service. Sep 13 00:06:57.617616 systemd[1]: Starting kubelet.service... Sep 13 00:06:57.639731 systemd[1]: Reloading. Sep 13 00:06:57.704208 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2025-09-13T00:06:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:06:57.704239 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2025-09-13T00:06:57Z" level=info msg="torcx already run" Sep 13 00:06:57.782898 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:06:57.782918 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:06:57.798737 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:57.866843 systemd[1]: Started kubelet.service. Sep 13 00:06:57.870517 systemd[1]: Stopping kubelet.service... 
Sep 13 00:06:57.870760 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:06:57.870938 systemd[1]: Stopped kubelet.service. Sep 13 00:06:57.872518 systemd[1]: Starting kubelet.service... Sep 13 00:06:57.980098 systemd[1]: Started kubelet.service. Sep 13 00:06:58.025108 kubelet[1581]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:58.025531 kubelet[1581]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:06:58.025580 kubelet[1581]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:06:58.025716 kubelet[1581]: I0913 00:06:58.025683 1581 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:06:58.850487 kubelet[1581]: I0913 00:06:58.850440 1581 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:06:58.850487 kubelet[1581]: I0913 00:06:58.850475 1581 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:06:58.851079 kubelet[1581]: I0913 00:06:58.851046 1581 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:06:58.887563 kubelet[1581]: I0913 00:06:58.887525 1581 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:06:58.887735 kubelet[1581]: E0913 00:06:58.887709 1581 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:06:58.894040 kubelet[1581]: E0913 00:06:58.894013 1581 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:06:58.894209 kubelet[1581]: I0913 00:06:58.894193 1581 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:06:58.897682 kubelet[1581]: I0913 00:06:58.897655 1581 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:06:58.899085 kubelet[1581]: I0913 00:06:58.899048 1581 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:06:58.899683 kubelet[1581]: I0913 00:06:58.899200 1581 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:06:58.899889 kubelet[1581]: I0913 00:06:58.899876 1581 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:06:58.899947 
kubelet[1581]: I0913 00:06:58.899938 1581 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:06:58.900218 kubelet[1581]: I0913 00:06:58.900204 1581 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:58.903379 kubelet[1581]: I0913 00:06:58.903356 1581 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:06:58.903468 kubelet[1581]: I0913 00:06:58.903457 1581 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:06:58.903571 kubelet[1581]: I0913 00:06:58.903560 1581 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:06:58.903645 kubelet[1581]: I0913 00:06:58.903635 1581 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:06:58.904662 kubelet[1581]: I0913 00:06:58.904645 1581 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:06:58.907338 kubelet[1581]: I0913 00:06:58.907305 1581 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:06:58.907448 kubelet[1581]: W0913 00:06:58.907433 1581 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 00:06:58.912633 kubelet[1581]: I0913 00:06:58.912613 1581 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:06:58.912732 kubelet[1581]: I0913 00:06:58.912718 1581 server.go:1289] "Started kubelet" Sep 13 00:06:58.916038 kubelet[1581]: E0913 00:06:58.916004 1581 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:06:58.917265 kubelet[1581]: I0913 00:06:58.917227 1581 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:06:58.929671 kubelet[1581]: I0913 00:06:58.929614 1581 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:06:58.929966 kubelet[1581]: I0913 00:06:58.929950 1581 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:06:58.935348 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:06:58.935458 kubelet[1581]: I0913 00:06:58.935439 1581 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:06:58.936699 kubelet[1581]: E0913 00:06:58.936636 1581 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:06:58.936822 kubelet[1581]: E0913 00:06:58.934479 1581 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aedd072b743d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:06:58.912629821 +0000 UTC m=+0.928074721,LastTimestamp:2025-09-13 00:06:58.912629821 +0000 UTC m=+0.928074721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:06:58.937463 kubelet[1581]: I0913 00:06:58.936985 1581 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:06:58.937626 kubelet[1581]: E0913 00:06:58.937560 1581 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Sep 13 00:06:58.937626 kubelet[1581]: E0913 00:06:58.937041 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:06:58.937626 kubelet[1581]: I0913 00:06:58.937059 1581 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:06:58.938062 kubelet[1581]: I0913 00:06:58.937066 1581 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:06:58.938158 kubelet[1581]: I0913 00:06:58.936992 1581 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:06:58.938370 kubelet[1581]: I0913 00:06:58.938339 1581 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:06:58.938882 kubelet[1581]: E0913 00:06:58.938804 1581 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms" Sep 13 00:06:58.938961 kubelet[1581]: E0913 00:06:58.938902 1581 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:06:58.938961 kubelet[1581]: I0913 00:06:58.938956 1581 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:06:58.939482 kubelet[1581]: I0913 00:06:58.939341 1581 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:06:58.939482 kubelet[1581]: I0913 00:06:58.939361 1581 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:06:58.951445 kubelet[1581]: I0913 00:06:58.950537 1581 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:06:58.951445 kubelet[1581]: I0913 00:06:58.950553 1581 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:06:58.951445 kubelet[1581]: I0913 00:06:58.950569 1581 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:58.951652 kubelet[1581]: I0913 00:06:58.951621 1581 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:06:58.952688 kubelet[1581]: I0913 00:06:58.952663 1581 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:06:58.952688 kubelet[1581]: I0913 00:06:58.952685 1581 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:06:58.952791 kubelet[1581]: I0913 00:06:58.952706 1581 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:06:58.952791 kubelet[1581]: I0913 00:06:58.952713 1581 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:06:58.952791 kubelet[1581]: E0913 00:06:58.952752 1581 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:06:59.027095 kubelet[1581]: I0913 00:06:59.027056 1581 policy_none.go:49] "None policy: Start" Sep 13 00:06:59.027095 kubelet[1581]: I0913 00:06:59.027087 1581 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:06:59.027095 kubelet[1581]: I0913 00:06:59.027099 1581 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:06:59.027531 kubelet[1581]: E0913 00:06:59.027495 1581 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:06:59.031700 systemd[1]: Created slice kubepods.slice. Sep 13 00:06:59.037159 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:06:59.037754 kubelet[1581]: E0913 00:06:59.037691 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:06:59.040377 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 13 00:06:59.049806 kubelet[1581]: E0913 00:06:59.049775 1581 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:06:59.050030 kubelet[1581]: I0913 00:06:59.049919 1581 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:06:59.050030 kubelet[1581]: I0913 00:06:59.049935 1581 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:06:59.051001 kubelet[1581]: I0913 00:06:59.050984 1581 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:06:59.051498 kubelet[1581]: E0913 00:06:59.051427 1581 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:06:59.051574 kubelet[1581]: E0913 00:06:59.051506 1581 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:06:59.060894 systemd[1]: Created slice kubepods-burstable-pod837761f8fedf6bab5d8d9589aa9b9fe5.slice. Sep 13 00:06:59.071814 kubelet[1581]: E0913 00:06:59.071750 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:06:59.074696 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 00:06:59.089086 kubelet[1581]: E0913 00:06:59.089051 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:06:59.090937 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 13 00:06:59.092234 kubelet[1581]: E0913 00:06:59.092210 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:06:59.140247 kubelet[1581]: I0913 00:06:59.139391 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/837761f8fedf6bab5d8d9589aa9b9fe5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"837761f8fedf6bab5d8d9589aa9b9fe5\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:06:59.140247 kubelet[1581]: I0913 00:06:59.139429 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:06:59.140247 kubelet[1581]: I0913 00:06:59.139449 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:06:59.140247 kubelet[1581]: I0913 00:06:59.139467 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:06:59.140247 kubelet[1581]: I0913 00:06:59.139481 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:06:59.140449 kubelet[1581]: I0913 00:06:59.139495 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/837761f8fedf6bab5d8d9589aa9b9fe5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"837761f8fedf6bab5d8d9589aa9b9fe5\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:06:59.140449 kubelet[1581]: I0913 00:06:59.139510 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/837761f8fedf6bab5d8d9589aa9b9fe5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"837761f8fedf6bab5d8d9589aa9b9fe5\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:06:59.140449 kubelet[1581]: I0913 00:06:59.139524 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:06:59.140449 kubelet[1581]: I0913 00:06:59.139538 1581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:06:59.140449 kubelet[1581]: E0913 00:06:59.139672 1581 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms"
Sep 13 00:06:59.152032 kubelet[1581]: I0913 00:06:59.151951 1581 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:06:59.152449 kubelet[1581]: E0913 00:06:59.152412 1581 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Sep 13 00:06:59.354737 kubelet[1581]: I0913 00:06:59.354235 1581 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:06:59.355072 kubelet[1581]: E0913 00:06:59.354663 1581 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Sep 13 00:06:59.373003 kubelet[1581]: E0913 00:06:59.372965 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:06:59.374070 env[1219]: time="2025-09-13T00:06:59.373653901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:837761f8fedf6bab5d8d9589aa9b9fe5,Namespace:kube-system,Attempt:0,}"
Sep 13 00:06:59.390529 kubelet[1581]: E0913 00:06:59.390184 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:06:59.390723 env[1219]: time="2025-09-13T00:06:59.390669221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}"
Sep 13 00:06:59.393782 kubelet[1581]: E0913 00:06:59.393751 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:06:59.395616 env[1219]: time="2025-09-13T00:06:59.395266101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}"
Sep 13 00:06:59.541237 kubelet[1581]: E0913 00:06:59.541189 1581 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms"
Sep 13 00:06:59.760238 kubelet[1581]: I0913 00:06:59.758655 1581 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:06:59.762397 kubelet[1581]: E0913 00:06:59.762356 1581 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Sep 13 00:06:59.858350 kubelet[1581]: E0913 00:06:59.858303 1581 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:06:59.921291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756556131.mount: Deactivated successfully.
Sep 13 00:06:59.929100 env[1219]: time="2025-09-13T00:06:59.929045581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.931637 env[1219]: time="2025-09-13T00:06:59.931594461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.932730 env[1219]: time="2025-09-13T00:06:59.932670941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.935169 env[1219]: time="2025-09-13T00:06:59.935137541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.937090 env[1219]: time="2025-09-13T00:06:59.937060101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.938597 env[1219]: time="2025-09-13T00:06:59.938571501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.941489 env[1219]: time="2025-09-13T00:06:59.941463861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.942224 env[1219]: time="2025-09-13T00:06:59.942201101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.944300 env[1219]: time="2025-09-13T00:06:59.944268261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.945083 env[1219]: time="2025-09-13T00:06:59.945059541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.946993 env[1219]: time="2025-09-13T00:06:59.946834501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.948202 env[1219]: time="2025-09-13T00:06:59.948176021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:59.981305 env[1219]: time="2025-09-13T00:06:59.981215021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:06:59.981462 env[1219]: time="2025-09-13T00:06:59.981281861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:06:59.981462 env[1219]: time="2025-09-13T00:06:59.981292541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:59.981533 env[1219]: time="2025-09-13T00:06:59.981495901Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab6a27bf50c1972eb0d7defcc22c226056c2fc1410f0c913daaafbf90c482725 pid=1636 runtime=io.containerd.runc.v2
Sep 13 00:06:59.983243 env[1219]: time="2025-09-13T00:06:59.983177061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:06:59.983400 env[1219]: time="2025-09-13T00:06:59.983375341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:06:59.983493 env[1219]: time="2025-09-13T00:06:59.983471341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:59.983752 env[1219]: time="2025-09-13T00:06:59.983722541Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69ac6fb6b2f5b708470703c734be1df28dd044c046a345cc2566bd86d30b1204 pid=1649 runtime=io.containerd.runc.v2
Sep 13 00:06:59.984948 env[1219]: time="2025-09-13T00:06:59.984873141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:06:59.984948 env[1219]: time="2025-09-13T00:06:59.984909821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:06:59.984948 env[1219]: time="2025-09-13T00:06:59.984920221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:06:59.985147 env[1219]: time="2025-09-13T00:06:59.985078941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/399caa7e8b2b0aecd174ca6ca034ef4bcfa76f9e9779d6952e0c56cbe5fba3a0 pid=1648 runtime=io.containerd.runc.v2
Sep 13 00:06:59.995333 systemd[1]: Started cri-containerd-ab6a27bf50c1972eb0d7defcc22c226056c2fc1410f0c913daaafbf90c482725.scope.
Sep 13 00:07:00.013871 systemd[1]: Started cri-containerd-399caa7e8b2b0aecd174ca6ca034ef4bcfa76f9e9779d6952e0c56cbe5fba3a0.scope.
Sep 13 00:07:00.014766 systemd[1]: Started cri-containerd-69ac6fb6b2f5b708470703c734be1df28dd044c046a345cc2566bd86d30b1204.scope.
Sep 13 00:07:00.052674 env[1219]: time="2025-09-13T00:07:00.052630861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab6a27bf50c1972eb0d7defcc22c226056c2fc1410f0c913daaafbf90c482725\""
Sep 13 00:07:00.053801 kubelet[1581]: E0913 00:07:00.053599 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:00.061728 env[1219]: time="2025-09-13T00:07:00.061682061Z" level=info msg="CreateContainer within sandbox \"ab6a27bf50c1972eb0d7defcc22c226056c2fc1410f0c913daaafbf90c482725\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:07:00.070559 env[1219]: time="2025-09-13T00:07:00.070509181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:837761f8fedf6bab5d8d9589aa9b9fe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"399caa7e8b2b0aecd174ca6ca034ef4bcfa76f9e9779d6952e0c56cbe5fba3a0\""
Sep 13 00:07:00.071374 kubelet[1581]: E0913 00:07:00.071343 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:00.072671 env[1219]: time="2025-09-13T00:07:00.072630301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"69ac6fb6b2f5b708470703c734be1df28dd044c046a345cc2566bd86d30b1204\""
Sep 13 00:07:00.074041 kubelet[1581]: E0913 00:07:00.073968 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:00.075663 env[1219]: time="2025-09-13T00:07:00.075635381Z" level=info msg="CreateContainer within sandbox \"399caa7e8b2b0aecd174ca6ca034ef4bcfa76f9e9779d6952e0c56cbe5fba3a0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:07:00.077308 env[1219]: time="2025-09-13T00:07:00.077278221Z" level=info msg="CreateContainer within sandbox \"69ac6fb6b2f5b708470703c734be1df28dd044c046a345cc2566bd86d30b1204\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:07:00.078534 env[1219]: time="2025-09-13T00:07:00.078481221Z" level=info msg="CreateContainer within sandbox \"ab6a27bf50c1972eb0d7defcc22c226056c2fc1410f0c913daaafbf90c482725\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69c85c6e62e2881002a162c2fe2eace3a3e5c58d5c86688ed989236c8f35fdca\""
Sep 13 00:07:00.079409 env[1219]: time="2025-09-13T00:07:00.079354661Z" level=info msg="StartContainer for \"69c85c6e62e2881002a162c2fe2eace3a3e5c58d5c86688ed989236c8f35fdca\""
Sep 13 00:07:00.088395 env[1219]: time="2025-09-13T00:07:00.088358061Z" level=info msg="CreateContainer within sandbox \"399caa7e8b2b0aecd174ca6ca034ef4bcfa76f9e9779d6952e0c56cbe5fba3a0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"67f4146755adba96332c25d775e61a6156be7d27f430b8f53a02b8778068a49b\""
Sep 13 00:07:00.088949 env[1219]: time="2025-09-13T00:07:00.088919461Z" level=info msg="StartContainer for \"67f4146755adba96332c25d775e61a6156be7d27f430b8f53a02b8778068a49b\""
Sep 13 00:07:00.092078 env[1219]: time="2025-09-13T00:07:00.092037181Z" level=info msg="CreateContainer within sandbox \"69ac6fb6b2f5b708470703c734be1df28dd044c046a345cc2566bd86d30b1204\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9bc72860ff317aa345fe33e9924e7c66f187e7e17e46f8cd55ac6dc384f5181e\""
Sep 13 00:07:00.092603 env[1219]: time="2025-09-13T00:07:00.092574061Z" level=info msg="StartContainer for \"9bc72860ff317aa345fe33e9924e7c66f187e7e17e46f8cd55ac6dc384f5181e\""
Sep 13 00:07:00.097539 systemd[1]: Started cri-containerd-69c85c6e62e2881002a162c2fe2eace3a3e5c58d5c86688ed989236c8f35fdca.scope.
Sep 13 00:07:00.110985 systemd[1]: Started cri-containerd-67f4146755adba96332c25d775e61a6156be7d27f430b8f53a02b8778068a49b.scope.
Sep 13 00:07:00.112920 systemd[1]: Started cri-containerd-9bc72860ff317aa345fe33e9924e7c66f187e7e17e46f8cd55ac6dc384f5181e.scope.
Sep 13 00:07:00.147901 env[1219]: time="2025-09-13T00:07:00.147864421Z" level=info msg="StartContainer for \"69c85c6e62e2881002a162c2fe2eace3a3e5c58d5c86688ed989236c8f35fdca\" returns successfully"
Sep 13 00:07:00.162266 env[1219]: time="2025-09-13T00:07:00.162211341Z" level=info msg="StartContainer for \"67f4146755adba96332c25d775e61a6156be7d27f430b8f53a02b8778068a49b\" returns successfully"
Sep 13 00:07:00.178377 env[1219]: time="2025-09-13T00:07:00.178304221Z" level=info msg="StartContainer for \"9bc72860ff317aa345fe33e9924e7c66f187e7e17e46f8cd55ac6dc384f5181e\" returns successfully"
Sep 13 00:07:00.221427 kubelet[1581]: E0913 00:07:00.221376 1581 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:07:00.225345 kubelet[1581]: E0913 00:07:00.225256 1581 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:07:00.563830 kubelet[1581]: I0913 00:07:00.563800 1581 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:07:00.958990 kubelet[1581]: E0913 00:07:00.958848 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:07:00.958990 kubelet[1581]: E0913 00:07:00.958987 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:00.961150 kubelet[1581]: E0913 00:07:00.961103 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:07:00.961284 kubelet[1581]: E0913 00:07:00.961267 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:00.963016 kubelet[1581]: E0913 00:07:00.962865 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:07:00.963016 kubelet[1581]: E0913 00:07:00.962968 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:01.529126 kubelet[1581]: E0913 00:07:01.529069 1581 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 13 00:07:01.681035 kubelet[1581]: I0913 00:07:01.681001 1581 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 00:07:01.681216 kubelet[1581]: E0913 00:07:01.681200 1581 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 13 00:07:01.708546 kubelet[1581]: E0913 00:07:01.708468 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:07:01.809184 kubelet[1581]: E0913 00:07:01.809064 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:07:01.910219 kubelet[1581]: E0913 00:07:01.910183 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:07:01.964607 kubelet[1581]: E0913 00:07:01.964577 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:07:01.964870 kubelet[1581]: E0913 00:07:01.964852 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:01.965256 kubelet[1581]: E0913 00:07:01.965237 1581 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:07:01.965522 kubelet[1581]: E0913 00:07:01.965496 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:02.010921 kubelet[1581]: E0913 00:07:02.010895 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:07:02.112194 kubelet[1581]: E0913 00:07:02.112061 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:07:02.213226 kubelet[1581]: E0913 00:07:02.213186 1581 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:07:02.339574 kubelet[1581]: I0913 00:07:02.339536 1581 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:07:02.356230 kubelet[1581]: E0913 00:07:02.356049 1581 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:07:02.357051 kubelet[1581]: I0913 00:07:02.356748 1581 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:07:02.360734 kubelet[1581]: E0913 00:07:02.360707 1581 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:07:02.360909 kubelet[1581]: I0913 00:07:02.360895 1581 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:07:02.364731 kubelet[1581]: E0913 00:07:02.364593 1581 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:07:02.905197 kubelet[1581]: I0913 00:07:02.905164 1581 apiserver.go:52] "Watching apiserver"
Sep 13 00:07:02.938559 kubelet[1581]: I0913 00:07:02.938532 1581 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:07:02.964937 kubelet[1581]: I0913 00:07:02.964909 1581 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:07:02.970149 kubelet[1581]: E0913 00:07:02.970029 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:03.957386 systemd[1]: Reloading.
Sep 13 00:07:03.968277 kubelet[1581]: E0913 00:07:03.968247 1581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:04.033300 /usr/lib/systemd/system-generators/torcx-generator[1888]: time="2025-09-13T00:07:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:07:04.033331 /usr/lib/systemd/system-generators/torcx-generator[1888]: time="2025-09-13T00:07:04Z" level=info msg="torcx already run"
Sep 13 00:07:04.095316 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:07:04.095339 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:07:04.110905 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:07:04.197773 kubelet[1581]: I0913 00:07:04.197738 1581 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:07:04.198009 systemd[1]: Stopping kubelet.service...
Sep 13 00:07:04.217810 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:07:04.218002 systemd[1]: Stopped kubelet.service.
Sep 13 00:07:04.218064 systemd[1]: kubelet.service: Consumed 1.247s CPU time.
Sep 13 00:07:04.221358 systemd[1]: Starting kubelet.service...
Sep 13 00:07:04.321348 systemd[1]: Started kubelet.service.
Sep 13 00:07:04.374529 kubelet[1928]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:04.374947 kubelet[1928]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:07:04.374996 kubelet[1928]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:04.375194 kubelet[1928]: I0913 00:07:04.375159 1928 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:07:04.385610 kubelet[1928]: I0913 00:07:04.385482 1928 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:07:04.385610 kubelet[1928]: I0913 00:07:04.385520 1928 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:07:04.385898 kubelet[1928]: I0913 00:07:04.385824 1928 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:07:04.388385 kubelet[1928]: I0913 00:07:04.388358 1928 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 13 00:07:04.393008 kubelet[1928]: I0913 00:07:04.392979 1928 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:07:04.395738 kubelet[1928]: E0913 00:07:04.395679 1928 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:07:04.395808 kubelet[1928]: I0913 00:07:04.395740 1928 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:07:04.398335 kubelet[1928]: I0913 00:07:04.398313 1928 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:07:04.398570 kubelet[1928]: I0913 00:07:04.398548 1928 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:07:04.398721 kubelet[1928]: I0913 00:07:04.398572 1928 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:07:04.398802 kubelet[1928]: I0913 00:07:04.398729 1928 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:07:04.398802 kubelet[1928]: I0913 00:07:04.398739 1928 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:07:04.398802 kubelet[1928]: I0913 00:07:04.398784 1928 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:04.398924 kubelet[1928]: I0913 00:07:04.398912 1928 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:07:04.398958 kubelet[1928]: I0913 00:07:04.398928 1928 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:07:04.398958 kubelet[1928]: I0913 00:07:04.398950 1928 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:07:04.399020 kubelet[1928]: I0913 00:07:04.398962 1928 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:07:04.399975 kubelet[1928]: I0913 00:07:04.399948 1928 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:07:04.400622 kubelet[1928]: I0913 00:07:04.400602 1928 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:07:04.402378 kubelet[1928]: I0913 00:07:04.402354 1928 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:07:04.402509 kubelet[1928]: I0913 00:07:04.402497 1928 server.go:1289] "Started kubelet"
Sep 13 00:07:04.403310 kubelet[1928]: I0913 00:07:04.403262 1928 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:07:04.405140 kubelet[1928]: I0913 00:07:04.405074 1928 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:07:04.405304 kubelet[1928]: I0913 00:07:04.405255 1928 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:07:04.405590 kubelet[1928]: I0913 00:07:04.405570 1928 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:07:04.407242 kubelet[1928]: I0913 00:07:04.407207 1928 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:07:04.408500 kubelet[1928]: I0913 00:07:04.408463 1928 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:07:04.408703 kubelet[1928]: E0913 00:07:04.408674 1928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:07:04.409420 kubelet[1928]: I0913 00:07:04.409390 1928 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:07:04.409549 kubelet[1928]: I0913 00:07:04.409531 1928 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:07:04.411654 kubelet[1928]: I0913 00:07:04.411628 1928 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:07:04.415982 kubelet[1928]: I0913 00:07:04.415953 1928 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:07:04.421532 kubelet[1928]: I0913 00:07:04.421495 1928 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:07:04.430124 kubelet[1928]: I0913 00:07:04.430034 1928 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:07:04.430924 kubelet[1928]: I0913 00:07:04.430900 1928 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:07:04.438733 kubelet[1928]: E0913 00:07:04.438703 1928 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:07:04.444008 kubelet[1928]: I0913 00:07:04.443973 1928 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:07:04.444008 kubelet[1928]: I0913 00:07:04.444000 1928 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:07:04.444182 kubelet[1928]: I0913 00:07:04.444019 1928 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:07:04.444182 kubelet[1928]: I0913 00:07:04.444027 1928 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:07:04.444182 kubelet[1928]: E0913 00:07:04.444084 1928 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:07:04.464140 kubelet[1928]: I0913 00:07:04.464084 1928 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:07:04.464140 kubelet[1928]: I0913 00:07:04.464135 1928 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:07:04.464272 kubelet[1928]: I0913 00:07:04.464158 1928 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:04.464315 kubelet[1928]: I0913 00:07:04.464298 1928 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:07:04.464341 kubelet[1928]: I0913 00:07:04.464309 1928 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:07:04.464341 kubelet[1928]: I0913 00:07:04.464326 1928 policy_none.go:49] "None policy: Start"
Sep 13 00:07:04.464341 kubelet[1928]: I0913 00:07:04.464335 1928 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:07:04.464406 kubelet[1928]: I0913 00:07:04.464343 1928 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:07:04.464427 kubelet[1928]: I0913 00:07:04.464419 1928 state_mem.go:75] "Updated machine memory state"
Sep 13 00:07:04.468553 kubelet[1928]: E0913 00:07:04.468469 1928 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:07:04.468657 kubelet[1928]: I0913 00:07:04.468641 1928 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:07:04.468685 kubelet[1928]: I0913 00:07:04.468660 1928 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:07:04.469585 kubelet[1928]: I0913 00:07:04.469188 1928 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:07:04.470078 kubelet[1928]: E0913 00:07:04.469951 1928 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:07:04.545582 kubelet[1928]: I0913 00:07:04.545548 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:07:04.545702 kubelet[1928]: I0913 00:07:04.545601 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:07:04.546559 kubelet[1928]: I0913 00:07:04.546505 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:07:04.557617 kubelet[1928]: E0913 00:07:04.557531 1928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:07:04.573196 kubelet[1928]: I0913 00:07:04.573161 1928 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:07:04.584680 kubelet[1928]: I0913 00:07:04.584641 1928 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 13 00:07:04.584795 kubelet[1928]: I0913 00:07:04.584746 1928 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 00:07:04.711804 kubelet[1928]: I0913 00:07:04.711728 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/837761f8fedf6bab5d8d9589aa9b9fe5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"837761f8fedf6bab5d8d9589aa9b9fe5\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:07:04.712210 kubelet[1928]: I0913 00:07:04.712190 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:07:04.712358 kubelet[1928]: I0913 00:07:04.712339 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:07:04.712474 kubelet[1928]: I0913 00:07:04.712455 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/837761f8fedf6bab5d8d9589aa9b9fe5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"837761f8fedf6bab5d8d9589aa9b9fe5\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:07:04.712563 kubelet[1928]: I0913 00:07:04.712550 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID:
\"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:04.712671 kubelet[1928]: I0913 00:07:04.712658 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:04.712764 kubelet[1928]: I0913 00:07:04.712753 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:04.712860 kubelet[1928]: I0913 00:07:04.712847 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:04.712926 kubelet[1928]: I0913 00:07:04.712914 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/837761f8fedf6bab5d8d9589aa9b9fe5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"837761f8fedf6bab5d8d9589aa9b9fe5\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:04.856366 kubelet[1928]: E0913 00:07:04.856324 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:04.858507 kubelet[1928]: E0913 00:07:04.858484 1928 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:04.858831 kubelet[1928]: E0913 00:07:04.858756 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:04.950479 sudo[1967]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:07:04.950855 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:07:05.400295 kubelet[1928]: I0913 00:07:05.400208 1928 apiserver.go:52] "Watching apiserver" Sep 13 00:07:05.409799 kubelet[1928]: I0913 00:07:05.409754 1928 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:07:05.455421 kubelet[1928]: I0913 00:07:05.455380 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:05.455559 kubelet[1928]: I0913 00:07:05.455457 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:05.456466 sudo[1967]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:05.457550 kubelet[1928]: E0913 00:07:05.457236 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:05.467043 kubelet[1928]: E0913 00:07:05.466999 1928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:05.467274 kubelet[1928]: E0913 00:07:05.467203 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 13 00:07:05.467397 kubelet[1928]: E0913 00:07:05.467276 1928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:05.472532 kubelet[1928]: E0913 00:07:05.472477 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:05.489370 kubelet[1928]: I0913 00:07:05.489257 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.489239181 podStartE2EDuration="3.489239181s" podCreationTimestamp="2025-09-13 00:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:05.477333181 +0000 UTC m=+1.149516521" watchObservedRunningTime="2025-09-13 00:07:05.489239181 +0000 UTC m=+1.161422521" Sep 13 00:07:05.504082 kubelet[1928]: I0913 00:07:05.504017 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5039905409999998 podStartE2EDuration="1.503990541s" podCreationTimestamp="2025-09-13 00:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:05.489575861 +0000 UTC m=+1.161759241" watchObservedRunningTime="2025-09-13 00:07:05.503990541 +0000 UTC m=+1.176173921" Sep 13 00:07:05.516445 kubelet[1928]: I0913 00:07:05.516386 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.516371261 podStartE2EDuration="1.516371261s" podCreationTimestamp="2025-09-13 00:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-13 00:07:05.504659821 +0000 UTC m=+1.176843201" watchObservedRunningTime="2025-09-13 00:07:05.516371261 +0000 UTC m=+1.188554641" Sep 13 00:07:06.457918 kubelet[1928]: E0913 00:07:06.457879 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:06.458392 kubelet[1928]: E0913 00:07:06.457886 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:07.356002 sudo[1321]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:07.358603 sshd[1317]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:07.365690 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:49354.service: Deactivated successfully. Sep 13 00:07:07.366459 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:07:07.366619 systemd[1]: session-5.scope: Consumed 7.529s CPU time. Sep 13 00:07:07.367432 systemd-logind[1207]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:07:07.368255 systemd-logind[1207]: Removed session 5. 
Sep 13 00:07:07.459860 kubelet[1928]: E0913 00:07:07.459814 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:07.460329 kubelet[1928]: E0913 00:07:07.460295 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:10.753386 kubelet[1928]: I0913 00:07:10.753337 1928 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:07:10.753854 env[1219]: time="2025-09-13T00:07:10.753797154Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:07:10.754063 kubelet[1928]: I0913 00:07:10.754025 1928 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:07:10.762037 kubelet[1928]: E0913 00:07:10.761982 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:11.453345 systemd[1]: Created slice kubepods-besteffort-pod96667941_42fa_417e_a665_9d1a66d5f6fb.slice.
Sep 13 00:07:11.462389 kubelet[1928]: I0913 00:07:11.462355 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96667941-42fa-417e-a665-9d1a66d5f6fb-kube-proxy\") pod \"kube-proxy-zhj77\" (UID: \"96667941-42fa-417e-a665-9d1a66d5f6fb\") " pod="kube-system/kube-proxy-zhj77"
Sep 13 00:07:11.462389 kubelet[1928]: I0913 00:07:11.462391 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96667941-42fa-417e-a665-9d1a66d5f6fb-xtables-lock\") pod \"kube-proxy-zhj77\" (UID: \"96667941-42fa-417e-a665-9d1a66d5f6fb\") " pod="kube-system/kube-proxy-zhj77"
Sep 13 00:07:11.462563 kubelet[1928]: I0913 00:07:11.462408 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96667941-42fa-417e-a665-9d1a66d5f6fb-lib-modules\") pod \"kube-proxy-zhj77\" (UID: \"96667941-42fa-417e-a665-9d1a66d5f6fb\") " pod="kube-system/kube-proxy-zhj77"
Sep 13 00:07:11.462563 kubelet[1928]: I0913 00:07:11.462425 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdftx\" (UniqueName: \"kubernetes.io/projected/96667941-42fa-417e-a665-9d1a66d5f6fb-kube-api-access-zdftx\") pod \"kube-proxy-zhj77\" (UID: \"96667941-42fa-417e-a665-9d1a66d5f6fb\") " pod="kube-system/kube-proxy-zhj77"
Sep 13 00:07:11.466246 systemd[1]: Created slice kubepods-burstable-podc1982f8e_190f_4964_8349_227a7b0fc2e6.slice.
Sep 13 00:07:11.468416 kubelet[1928]: E0913 00:07:11.468381 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:11.563512 kubelet[1928]: I0913 00:07:11.563460 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cni-path\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563512 kubelet[1928]: I0913 00:07:11.563503 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1982f8e-190f-4964-8349-227a7b0fc2e6-clustermesh-secrets\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563690 kubelet[1928]: I0913 00:07:11.563536 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-net\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563690 kubelet[1928]: I0913 00:07:11.563569 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-hostproc\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563690 kubelet[1928]: I0913 00:07:11.563589 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-cgroup\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563690 kubelet[1928]: I0913 00:07:11.563616 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-etc-cni-netd\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563690 kubelet[1928]: I0913 00:07:11.563631 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-lib-modules\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563690 kubelet[1928]: I0913 00:07:11.563653 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-xtables-lock\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563841 kubelet[1928]: I0913 00:07:11.563685 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-config-path\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563841 kubelet[1928]: I0913 00:07:11.563729 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-run\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563841 kubelet[1928]: I0913 00:07:11.563768 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-kernel\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563841 kubelet[1928]: I0913 00:07:11.563785 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-hubble-tls\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563841 kubelet[1928]: I0913 00:07:11.563802 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-bpf-maps\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.563841 kubelet[1928]: I0913 00:07:11.563818 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2td\" (UniqueName: \"kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-kube-api-access-sk2td\") pod \"cilium-jwjsp\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " pod="kube-system/cilium-jwjsp"
Sep 13 00:07:11.573499 kubelet[1928]: I0913 00:07:11.573445 1928 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 00:07:11.763268 kubelet[1928]: E0913 00:07:11.763167 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:11.764528 env[1219]: time="2025-09-13T00:07:11.764485824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhj77,Uid:96667941-42fa-417e-a665-9d1a66d5f6fb,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:11.769902 kubelet[1928]: E0913 00:07:11.769851 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:11.770395 env[1219]: time="2025-09-13T00:07:11.770354013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwjsp,Uid:c1982f8e-190f-4964-8349-227a7b0fc2e6,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:11.791245 env[1219]: time="2025-09-13T00:07:11.790994677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:11.791245 env[1219]: time="2025-09-13T00:07:11.791121960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:11.791245 env[1219]: time="2025-09-13T00:07:11.791152560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:11.792597 env[1219]: time="2025-09-13T00:07:11.792503385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c0f590262c088a35f212117c0e2bfe2abd8555ec0bac20923dda9349a331203 pid=2024 runtime=io.containerd.runc.v2
Sep 13 00:07:11.802530 env[1219]: time="2025-09-13T00:07:11.802277567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:11.802530 env[1219]: time="2025-09-13T00:07:11.802486491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:11.802530 env[1219]: time="2025-09-13T00:07:11.802499331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:11.802814 env[1219]: time="2025-09-13T00:07:11.802783976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f pid=2045 runtime=io.containerd.runc.v2
Sep 13 00:07:11.810199 systemd[1]: Started cri-containerd-5c0f590262c088a35f212117c0e2bfe2abd8555ec0bac20923dda9349a331203.scope.
Sep 13 00:07:11.829645 systemd[1]: Started cri-containerd-7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f.scope.
Sep 13 00:07:11.859035 env[1219]: time="2025-09-13T00:07:11.858986982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhj77,Uid:96667941-42fa-417e-a665-9d1a66d5f6fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c0f590262c088a35f212117c0e2bfe2abd8555ec0bac20923dda9349a331203\""
Sep 13 00:07:11.861045 kubelet[1928]: E0913 00:07:11.861020 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:11.867712 env[1219]: time="2025-09-13T00:07:11.867557622Z" level=info msg="CreateContainer within sandbox \"5c0f590262c088a35f212117c0e2bfe2abd8555ec0bac20923dda9349a331203\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:07:11.871545 env[1219]: time="2025-09-13T00:07:11.871502375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwjsp,Uid:c1982f8e-190f-4964-8349-227a7b0fc2e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\""
Sep 13 00:07:11.872761 kubelet[1928]: E0913 00:07:11.872553 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:11.874778 env[1219]: time="2025-09-13T00:07:11.873959581Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:07:11.889957 env[1219]: time="2025-09-13T00:07:11.889900597Z" level=info msg="CreateContainer within sandbox \"5c0f590262c088a35f212117c0e2bfe2abd8555ec0bac20923dda9349a331203\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5789430ff62269b6b1a8da00843422caa331e599b89969a77e9e94d918f9d4ab\""
Sep 13 00:07:11.890725 env[1219]: time="2025-09-13T00:07:11.890681052Z" level=info msg="StartContainer for \"5789430ff62269b6b1a8da00843422caa331e599b89969a77e9e94d918f9d4ab\""
Sep 13 00:07:11.907546 systemd[1]: Started cri-containerd-5789430ff62269b6b1a8da00843422caa331e599b89969a77e9e94d918f9d4ab.scope.
Sep 13 00:07:11.935949 systemd[1]: Created slice kubepods-besteffort-pod4695fdbe_8eba_4e3c_864b_932851ceb7e2.slice.
Sep 13 00:07:11.963940 env[1219]: time="2025-09-13T00:07:11.963861893Z" level=info msg="StartContainer for \"5789430ff62269b6b1a8da00843422caa331e599b89969a77e9e94d918f9d4ab\" returns successfully"
Sep 13 00:07:11.967315 kubelet[1928]: I0913 00:07:11.967275 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4695fdbe-8eba-4e3c-864b-932851ceb7e2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-679nd\" (UID: \"4695fdbe-8eba-4e3c-864b-932851ceb7e2\") " pod="kube-system/cilium-operator-6c4d7847fc-679nd"
Sep 13 00:07:11.967444 kubelet[1928]: I0913 00:07:11.967327 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48hh\" (UniqueName: \"kubernetes.io/projected/4695fdbe-8eba-4e3c-864b-932851ceb7e2-kube-api-access-h48hh\") pod \"cilium-operator-6c4d7847fc-679nd\" (UID: \"4695fdbe-8eba-4e3c-864b-932851ceb7e2\") " pod="kube-system/cilium-operator-6c4d7847fc-679nd"
Sep 13 00:07:12.238801 kubelet[1928]: E0913 00:07:12.238492 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:12.239008 env[1219]: time="2025-09-13T00:07:12.238964455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-679nd,Uid:4695fdbe-8eba-4e3c-864b-932851ceb7e2,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:12.253710 env[1219]: time="2025-09-13T00:07:12.253625670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:12.253710 env[1219]: time="2025-09-13T00:07:12.253680671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:12.253928 env[1219]: time="2025-09-13T00:07:12.253691871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:12.254486 env[1219]: time="2025-09-13T00:07:12.254431844Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732 pid=2194 runtime=io.containerd.runc.v2
Sep 13 00:07:12.266047 systemd[1]: Started cri-containerd-e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732.scope.
Sep 13 00:07:12.301354 env[1219]: time="2025-09-13T00:07:12.301308822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-679nd,Uid:4695fdbe-8eba-4e3c-864b-932851ceb7e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732\""
Sep 13 00:07:12.302658 kubelet[1928]: E0913 00:07:12.302168 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:12.473523 kubelet[1928]: E0913 00:07:12.473479 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:13.729887 kubelet[1928]: E0913 00:07:13.729851 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:13.766594 kubelet[1928]: I0913 00:07:13.766538 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhj77" podStartSLOduration=2.766522943 podStartE2EDuration="2.766522943s" podCreationTimestamp="2025-09-13 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:12.482850668 +0000 UTC m=+8.155034048" watchObservedRunningTime="2025-09-13 00:07:13.766522943 +0000 UTC m=+9.438706323"
Sep 13 00:07:14.476378 kubelet[1928]: E0913 00:07:14.476292 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:15.480486 kubelet[1928]: E0913 00:07:15.478996 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:16.458886 kubelet[1928]: E0913 00:07:16.458855 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:07:17.435010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334183218.mount: Deactivated successfully.
Sep 13 00:07:18.409438 update_engine[1208]: I0913 00:07:18.409384 1208 update_attempter.cc:509] Updating boot flags...
Sep 13 00:07:19.973277 env[1219]: time="2025-09-13T00:07:19.973218216Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:07:19.976391 env[1219]: time="2025-09-13T00:07:19.976349771Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:07:19.978929 env[1219]: time="2025-09-13T00:07:19.978882599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:07:19.979828 env[1219]: time="2025-09-13T00:07:19.979795570Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 13 00:07:19.981812 env[1219]: time="2025-09-13T00:07:19.981778152Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:07:19.990562 env[1219]: time="2025-09-13T00:07:19.990491648Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:07:20.008632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740533913.mount: Deactivated successfully.
Sep 13 00:07:20.018463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209221955.mount: Deactivated successfully.
Sep 13 00:07:20.020079 env[1219]: time="2025-09-13T00:07:20.020027563Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52\"" Sep 13 00:07:20.021030 env[1219]: time="2025-09-13T00:07:20.020997493Z" level=info msg="StartContainer for \"22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52\"" Sep 13 00:07:20.037776 systemd[1]: Started cri-containerd-22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52.scope. Sep 13 00:07:20.069280 env[1219]: time="2025-09-13T00:07:20.069228755Z" level=info msg="StartContainer for \"22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52\" returns successfully" Sep 13 00:07:20.081902 systemd[1]: cri-containerd-22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52.scope: Deactivated successfully. Sep 13 00:07:20.157862 env[1219]: time="2025-09-13T00:07:20.157813477Z" level=info msg="shim disconnected" id=22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52 Sep 13 00:07:20.157862 env[1219]: time="2025-09-13T00:07:20.157859758Z" level=warning msg="cleaning up after shim disconnected" id=22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52 namespace=k8s.io Sep 13 00:07:20.157862 env[1219]: time="2025-09-13T00:07:20.157870358Z" level=info msg="cleaning up dead shim" Sep 13 00:07:20.169133 env[1219]: time="2025-09-13T00:07:20.169068114Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2373 runtime=io.containerd.runc.v2\n" Sep 13 00:07:20.490452 kubelet[1928]: E0913 00:07:20.490393 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:20.496980 env[1219]: 
time="2025-09-13T00:07:20.496935966Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:07:20.510613 env[1219]: time="2025-09-13T00:07:20.510532308Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d\"" Sep 13 00:07:20.511384 env[1219]: time="2025-09-13T00:07:20.511344556Z" level=info msg="StartContainer for \"c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d\"" Sep 13 00:07:20.526492 systemd[1]: Started cri-containerd-c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d.scope. Sep 13 00:07:20.556758 env[1219]: time="2025-09-13T00:07:20.556702029Z" level=info msg="StartContainer for \"c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d\" returns successfully" Sep 13 00:07:20.568496 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:07:20.568740 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:07:20.568916 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:07:20.570622 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:07:20.573019 systemd[1]: cri-containerd-c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d.scope: Deactivated successfully. Sep 13 00:07:20.593446 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:07:20.597240 env[1219]: time="2025-09-13T00:07:20.597190410Z" level=info msg="shim disconnected" id=c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d Sep 13 00:07:20.597240 env[1219]: time="2025-09-13T00:07:20.597237370Z" level=warning msg="cleaning up after shim disconnected" id=c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d namespace=k8s.io Sep 13 00:07:20.597419 env[1219]: time="2025-09-13T00:07:20.597248611Z" level=info msg="cleaning up dead shim" Sep 13 00:07:20.607150 env[1219]: time="2025-09-13T00:07:20.607087113Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2439 runtime=io.containerd.runc.v2\n" Sep 13 00:07:21.003390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52-rootfs.mount: Deactivated successfully. Sep 13 00:07:21.494912 kubelet[1928]: E0913 00:07:21.494878 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:21.503866 env[1219]: time="2025-09-13T00:07:21.503809158Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:07:21.518299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358389529.mount: Deactivated successfully. Sep 13 00:07:21.521988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545810231.mount: Deactivated successfully. 
Sep 13 00:07:21.526463 env[1219]: time="2025-09-13T00:07:21.526413939Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798\"" Sep 13 00:07:21.526946 env[1219]: time="2025-09-13T00:07:21.526893103Z" level=info msg="StartContainer for \"8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798\"" Sep 13 00:07:21.542324 systemd[1]: Started cri-containerd-8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798.scope. Sep 13 00:07:21.583811 systemd[1]: cri-containerd-8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798.scope: Deactivated successfully. Sep 13 00:07:21.597210 env[1219]: time="2025-09-13T00:07:21.597162749Z" level=info msg="StartContainer for \"8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798\" returns successfully" Sep 13 00:07:21.641559 env[1219]: time="2025-09-13T00:07:21.641513822Z" level=info msg="shim disconnected" id=8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798 Sep 13 00:07:21.641811 env[1219]: time="2025-09-13T00:07:21.641791664Z" level=warning msg="cleaning up after shim disconnected" id=8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798 namespace=k8s.io Sep 13 00:07:21.641871 env[1219]: time="2025-09-13T00:07:21.641859025Z" level=info msg="cleaning up dead shim" Sep 13 00:07:21.648624 env[1219]: time="2025-09-13T00:07:21.648579171Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2495 runtime=io.containerd.runc.v2\n" Sep 13 00:07:21.740743 env[1219]: time="2025-09-13T00:07:21.740692429Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:07:21.743143 env[1219]: time="2025-09-13T00:07:21.743094093Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:21.744684 env[1219]: time="2025-09-13T00:07:21.744648148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:07:21.746318 env[1219]: time="2025-09-13T00:07:21.745747839Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 00:07:21.752031 env[1219]: time="2025-09-13T00:07:21.751968019Z" level=info msg="CreateContainer within sandbox \"e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:07:21.764409 env[1219]: time="2025-09-13T00:07:21.764348620Z" level=info msg="CreateContainer within sandbox \"e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\"" Sep 13 00:07:21.764893 env[1219]: time="2025-09-13T00:07:21.764867745Z" level=info msg="StartContainer for \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\"" Sep 13 00:07:21.779299 systemd[1]: Started cri-containerd-7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed.scope. 
Sep 13 00:07:21.823546 env[1219]: time="2025-09-13T00:07:21.823477837Z" level=info msg="StartContainer for \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\" returns successfully" Sep 13 00:07:22.498705 kubelet[1928]: E0913 00:07:22.498673 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:22.501620 kubelet[1928]: E0913 00:07:22.501593 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:22.505777 env[1219]: time="2025-09-13T00:07:22.505731386Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:07:22.524101 env[1219]: time="2025-09-13T00:07:22.524003633Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19\"" Sep 13 00:07:22.524728 env[1219]: time="2025-09-13T00:07:22.524700519Z" level=info msg="StartContainer for \"4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19\"" Sep 13 00:07:22.551249 kubelet[1928]: I0913 00:07:22.551188 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-679nd" podStartSLOduration=2.107846929 podStartE2EDuration="11.551166321s" podCreationTimestamp="2025-09-13 00:07:11 +0000 UTC" firstStartedPulling="2025-09-13 00:07:12.303075693 +0000 UTC m=+7.975259073" lastFinishedPulling="2025-09-13 00:07:21.746395085 +0000 UTC m=+17.418578465" observedRunningTime="2025-09-13 00:07:22.512851571 +0000 UTC m=+18.185034991" 
watchObservedRunningTime="2025-09-13 00:07:22.551166321 +0000 UTC m=+18.223349781" Sep 13 00:07:22.573323 systemd[1]: Started cri-containerd-4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19.scope. Sep 13 00:07:22.605268 systemd[1]: cri-containerd-4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19.scope: Deactivated successfully. Sep 13 00:07:22.606415 env[1219]: time="2025-09-13T00:07:22.606336066Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1982f8e_190f_4964_8349_227a7b0fc2e6.slice/cri-containerd-4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19.scope/memory.events\": no such file or directory" Sep 13 00:07:22.608190 env[1219]: time="2025-09-13T00:07:22.608153243Z" level=info msg="StartContainer for \"4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19\" returns successfully" Sep 13 00:07:22.628805 env[1219]: time="2025-09-13T00:07:22.628759471Z" level=info msg="shim disconnected" id=4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19 Sep 13 00:07:22.628805 env[1219]: time="2025-09-13T00:07:22.628802872Z" level=warning msg="cleaning up after shim disconnected" id=4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19 namespace=k8s.io Sep 13 00:07:22.629015 env[1219]: time="2025-09-13T00:07:22.628814672Z" level=info msg="cleaning up dead shim" Sep 13 00:07:22.635263 env[1219]: time="2025-09-13T00:07:22.635212690Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2591 runtime=io.containerd.runc.v2\n" Sep 13 00:07:23.002413 systemd[1]: run-containerd-runc-k8s.io-4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19-runc.oW7Hok.mount: Deactivated successfully. 
Sep 13 00:07:23.002506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19-rootfs.mount: Deactivated successfully. Sep 13 00:07:23.505397 kubelet[1928]: E0913 00:07:23.505361 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:23.505954 kubelet[1928]: E0913 00:07:23.505935 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:23.510341 env[1219]: time="2025-09-13T00:07:23.510268043Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:07:23.524216 env[1219]: time="2025-09-13T00:07:23.524165363Z" level=info msg="CreateContainer within sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\"" Sep 13 00:07:23.524979 env[1219]: time="2025-09-13T00:07:23.524930809Z" level=info msg="StartContainer for \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\"" Sep 13 00:07:23.547405 systemd[1]: Started cri-containerd-1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce.scope. Sep 13 00:07:23.578938 env[1219]: time="2025-09-13T00:07:23.578891272Z" level=info msg="StartContainer for \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\" returns successfully" Sep 13 00:07:23.728171 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 13 00:07:23.744152 kubelet[1928]: I0913 00:07:23.744080 1928 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:07:23.787803 systemd[1]: Created slice kubepods-burstable-podd01a60f8_01d4_4d5c_acf6_3de948382429.slice. Sep 13 00:07:23.792559 systemd[1]: Created slice kubepods-burstable-pod32f57652_3930_41b3_ad27_24a62bcf0f4d.slice. Sep 13 00:07:23.881674 kubelet[1928]: I0913 00:07:23.881631 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-828mx\" (UniqueName: \"kubernetes.io/projected/d01a60f8-01d4-4d5c-acf6-3de948382429-kube-api-access-828mx\") pod \"coredns-674b8bbfcf-bvtzc\" (UID: \"d01a60f8-01d4-4d5c-acf6-3de948382429\") " pod="kube-system/coredns-674b8bbfcf-bvtzc" Sep 13 00:07:23.881674 kubelet[1928]: I0913 00:07:23.881673 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8b46\" (UniqueName: \"kubernetes.io/projected/32f57652-3930-41b3-ad27-24a62bcf0f4d-kube-api-access-b8b46\") pod \"coredns-674b8bbfcf-8jxn7\" (UID: \"32f57652-3930-41b3-ad27-24a62bcf0f4d\") " pod="kube-system/coredns-674b8bbfcf-8jxn7" Sep 13 00:07:23.881925 kubelet[1928]: I0913 00:07:23.881698 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32f57652-3930-41b3-ad27-24a62bcf0f4d-config-volume\") pod \"coredns-674b8bbfcf-8jxn7\" (UID: \"32f57652-3930-41b3-ad27-24a62bcf0f4d\") " pod="kube-system/coredns-674b8bbfcf-8jxn7" Sep 13 00:07:23.881925 kubelet[1928]: I0913 00:07:23.881725 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d01a60f8-01d4-4d5c-acf6-3de948382429-config-volume\") pod \"coredns-674b8bbfcf-bvtzc\" (UID: \"d01a60f8-01d4-4d5c-acf6-3de948382429\") " pod="kube-system/coredns-674b8bbfcf-bvtzc" Sep 
13 00:07:23.978146 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 00:07:24.005548 systemd[1]: run-containerd-runc-k8s.io-1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce-runc.8woEA9.mount: Deactivated successfully. Sep 13 00:07:24.090958 kubelet[1928]: E0913 00:07:24.090852 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:24.092253 env[1219]: time="2025-09-13T00:07:24.091880142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvtzc,Uid:d01a60f8-01d4-4d5c-acf6-3de948382429,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:24.096034 kubelet[1928]: E0913 00:07:24.096006 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:24.097039 env[1219]: time="2025-09-13T00:07:24.096994863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8jxn7,Uid:32f57652-3930-41b3-ad27-24a62bcf0f4d,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:24.510139 kubelet[1928]: E0913 00:07:24.509991 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:24.526594 kubelet[1928]: I0913 00:07:24.526439 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jwjsp" podStartSLOduration=5.418345777 podStartE2EDuration="13.526422435s" podCreationTimestamp="2025-09-13 00:07:11 +0000 UTC" firstStartedPulling="2025-09-13 00:07:11.873460731 +0000 UTC m=+7.545644071" lastFinishedPulling="2025-09-13 00:07:19.981537349 +0000 UTC m=+15.653720729" observedRunningTime="2025-09-13 00:07:24.525526988 +0000 UTC m=+20.197710368" 
watchObservedRunningTime="2025-09-13 00:07:24.526422435 +0000 UTC m=+20.198605815" Sep 13 00:07:25.511227 kubelet[1928]: E0913 00:07:25.511196 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:25.596818 systemd-networkd[1051]: cilium_host: Link UP Sep 13 00:07:25.596928 systemd-networkd[1051]: cilium_net: Link UP Sep 13 00:07:25.597458 systemd-networkd[1051]: cilium_net: Gained carrier Sep 13 00:07:25.598082 systemd-networkd[1051]: cilium_host: Gained carrier Sep 13 00:07:25.598235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:07:25.598273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:07:25.675659 systemd-networkd[1051]: cilium_vxlan: Link UP Sep 13 00:07:25.675665 systemd-networkd[1051]: cilium_vxlan: Gained carrier Sep 13 00:07:25.690247 systemd-networkd[1051]: cilium_host: Gained IPv6LL Sep 13 00:07:25.931148 kernel: NET: Registered PF_ALG protocol family Sep 13 00:07:26.512727 kubelet[1928]: E0913 00:07:26.512682 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:26.560574 systemd-networkd[1051]: lxc_health: Link UP Sep 13 00:07:26.578235 systemd-networkd[1051]: cilium_net: Gained IPv6LL Sep 13 00:07:26.581152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:07:26.581097 systemd-networkd[1051]: lxc_health: Gained carrier Sep 13 00:07:27.142790 systemd-networkd[1051]: lxc8999176fa298: Link UP Sep 13 00:07:27.152208 kernel: eth0: renamed from tmp6cb38 Sep 13 00:07:27.158304 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:07:27.158395 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8999176fa298: link becomes ready Sep 13 00:07:27.158135 systemd-networkd[1051]: 
lxc8999176fa298: Gained carrier Sep 13 00:07:27.158864 systemd-networkd[1051]: lxcf7af95c9ce40: Link UP Sep 13 00:07:27.165310 kernel: eth0: renamed from tmpbfa5b Sep 13 00:07:27.175226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf7af95c9ce40: link becomes ready Sep 13 00:07:27.176001 systemd-networkd[1051]: lxcf7af95c9ce40: Gained carrier Sep 13 00:07:27.418232 systemd-networkd[1051]: cilium_vxlan: Gained IPv6LL Sep 13 00:07:27.729611 kubelet[1928]: E0913 00:07:27.729500 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:27.795239 systemd-networkd[1051]: lxc_health: Gained IPv6LL Sep 13 00:07:28.306243 systemd-networkd[1051]: lxcf7af95c9ce40: Gained IPv6LL Sep 13 00:07:28.515467 kubelet[1928]: E0913 00:07:28.515437 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:29.010238 systemd-networkd[1051]: lxc8999176fa298: Gained IPv6LL Sep 13 00:07:29.516947 kubelet[1928]: E0913 00:07:29.516912 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:30.847192 env[1219]: time="2025-09-13T00:07:30.847026207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:30.847192 env[1219]: time="2025-09-13T00:07:30.847066367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:30.847192 env[1219]: time="2025-09-13T00:07:30.847076208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:30.847725 env[1219]: time="2025-09-13T00:07:30.847669051Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfa5b4d155320dea2f31667140bd055b980ea2055313b44f06bf62eae320dacc pid=3174 runtime=io.containerd.runc.v2 Sep 13 00:07:30.853468 env[1219]: time="2025-09-13T00:07:30.853388722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:30.853468 env[1219]: time="2025-09-13T00:07:30.853440002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:30.853641 env[1219]: time="2025-09-13T00:07:30.853451002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:30.853925 env[1219]: time="2025-09-13T00:07:30.853873765Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cb3804cccd2aa1335bfc4f608de7dcb09394b285fd5d52a63f466c975864e38 pid=3183 runtime=io.containerd.runc.v2 Sep 13 00:07:30.864287 systemd[1]: run-containerd-runc-k8s.io-bfa5b4d155320dea2f31667140bd055b980ea2055313b44f06bf62eae320dacc-runc.OYNDgA.mount: Deactivated successfully. Sep 13 00:07:30.867056 systemd[1]: Started cri-containerd-bfa5b4d155320dea2f31667140bd055b980ea2055313b44f06bf62eae320dacc.scope. Sep 13 00:07:30.871938 systemd[1]: Started cri-containerd-6cb3804cccd2aa1335bfc4f608de7dcb09394b285fd5d52a63f466c975864e38.scope. 
Sep 13 00:07:30.894298 systemd-resolved[1162]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:07:30.895369 systemd-resolved[1162]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:07:30.915882 env[1219]: time="2025-09-13T00:07:30.915838143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8jxn7,Uid:32f57652-3930-41b3-ad27-24a62bcf0f4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa5b4d155320dea2f31667140bd055b980ea2055313b44f06bf62eae320dacc\"" Sep 13 00:07:30.918543 env[1219]: time="2025-09-13T00:07:30.918501277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvtzc,Uid:d01a60f8-01d4-4d5c-acf6-3de948382429,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cb3804cccd2aa1335bfc4f608de7dcb09394b285fd5d52a63f466c975864e38\"" Sep 13 00:07:30.918674 kubelet[1928]: E0913 00:07:30.918646 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:30.921645 kubelet[1928]: E0913 00:07:30.921587 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:30.925348 env[1219]: time="2025-09-13T00:07:30.925018993Z" level=info msg="CreateContainer within sandbox \"bfa5b4d155320dea2f31667140bd055b980ea2055313b44f06bf62eae320dacc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:30.926723 env[1219]: time="2025-09-13T00:07:30.926691282Z" level=info msg="CreateContainer within sandbox \"6cb3804cccd2aa1335bfc4f608de7dcb09394b285fd5d52a63f466c975864e38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:30.940623 env[1219]: time="2025-09-13T00:07:30.940572078Z" level=info msg="CreateContainer 
within sandbox \"bfa5b4d155320dea2f31667140bd055b980ea2055313b44f06bf62eae320dacc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c5a9475b9a35530a80c7cf81b5bb2ce0a0fbe6f95da3527c7e52e39168b0504\"" Sep 13 00:07:30.942472 env[1219]: time="2025-09-13T00:07:30.941069881Z" level=info msg="StartContainer for \"8c5a9475b9a35530a80c7cf81b5bb2ce0a0fbe6f95da3527c7e52e39168b0504\"" Sep 13 00:07:30.946059 env[1219]: time="2025-09-13T00:07:30.946003667Z" level=info msg="CreateContainer within sandbox \"6cb3804cccd2aa1335bfc4f608de7dcb09394b285fd5d52a63f466c975864e38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1861e4f6b719ee8d8988ff9bd56eaeacc574b5c3deda37bdfca59b6349255ea6\"" Sep 13 00:07:30.947249 env[1219]: time="2025-09-13T00:07:30.947217834Z" level=info msg="StartContainer for \"1861e4f6b719ee8d8988ff9bd56eaeacc574b5c3deda37bdfca59b6349255ea6\"" Sep 13 00:07:30.957053 systemd[1]: Started cri-containerd-8c5a9475b9a35530a80c7cf81b5bb2ce0a0fbe6f95da3527c7e52e39168b0504.scope. Sep 13 00:07:30.963987 systemd[1]: Started cri-containerd-1861e4f6b719ee8d8988ff9bd56eaeacc574b5c3deda37bdfca59b6349255ea6.scope. 
Sep 13 00:07:31.006284 env[1219]: time="2025-09-13T00:07:31.006239034Z" level=info msg="StartContainer for \"8c5a9475b9a35530a80c7cf81b5bb2ce0a0fbe6f95da3527c7e52e39168b0504\" returns successfully" Sep 13 00:07:31.022278 env[1219]: time="2025-09-13T00:07:31.022217756Z" level=info msg="StartContainer for \"1861e4f6b719ee8d8988ff9bd56eaeacc574b5c3deda37bdfca59b6349255ea6\" returns successfully" Sep 13 00:07:31.520960 kubelet[1928]: E0913 00:07:31.520916 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:31.523314 kubelet[1928]: E0913 00:07:31.523287 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:31.534496 kubelet[1928]: I0913 00:07:31.534442 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8jxn7" podStartSLOduration=20.534427897 podStartE2EDuration="20.534427897s" podCreationTimestamp="2025-09-13 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:31.531843604 +0000 UTC m=+27.204026944" watchObservedRunningTime="2025-09-13 00:07:31.534427897 +0000 UTC m=+27.206611277" Sep 13 00:07:31.556121 kubelet[1928]: I0913 00:07:31.556044 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bvtzc" podStartSLOduration=20.556028008 podStartE2EDuration="20.556028008s" podCreationTimestamp="2025-09-13 00:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:31.545447553 +0000 UTC m=+27.217630933" watchObservedRunningTime="2025-09-13 00:07:31.556028008 +0000 UTC m=+27.228211388" Sep 
13 00:07:32.524479 kubelet[1928]: E0913 00:07:32.524442 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:32.524922 kubelet[1928]: E0913 00:07:32.524903 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:33.526498 kubelet[1928]: E0913 00:07:33.526461 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:33.526972 kubelet[1928]: E0913 00:07:33.526951 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:07:37.000936 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:51452.service. Sep 13 00:07:37.042343 sshd[3329]: Accepted publickey for core from 10.0.0.1 port 51452 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:37.043883 sshd[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:37.047691 systemd-logind[1207]: New session 6 of user core. Sep 13 00:07:37.048976 systemd[1]: Started session-6.scope. Sep 13 00:07:37.191747 sshd[3329]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:37.194121 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:51452.service: Deactivated successfully. Sep 13 00:07:37.194855 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:07:37.195435 systemd-logind[1207]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:07:37.196063 systemd-logind[1207]: Removed session 6. Sep 13 00:07:42.199386 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:53838.service. 
Sep 13 00:07:42.235075 sshd[3347]: Accepted publickey for core from 10.0.0.1 port 53838 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:42.236402 sshd[3347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:42.241062 systemd-logind[1207]: New session 7 of user core. Sep 13 00:07:42.244154 systemd[1]: Started session-7.scope. Sep 13 00:07:42.374963 sshd[3347]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:42.377654 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:53838.service: Deactivated successfully. Sep 13 00:07:42.378397 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:07:42.379047 systemd-logind[1207]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:07:42.379761 systemd-logind[1207]: Removed session 7. Sep 13 00:07:47.382105 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:53854.service. Sep 13 00:07:47.427465 sshd[3361]: Accepted publickey for core from 10.0.0.1 port 53854 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:47.428843 sshd[3361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:47.434070 systemd-logind[1207]: New session 8 of user core. Sep 13 00:07:47.437287 systemd[1]: Started session-8.scope. Sep 13 00:07:47.588802 sshd[3361]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:47.593255 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:53854.service: Deactivated successfully. Sep 13 00:07:47.594008 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:07:47.594774 systemd-logind[1207]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:07:47.595663 systemd-logind[1207]: Removed session 8. Sep 13 00:07:52.595742 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:37410.service. 
Sep 13 00:07:52.630838 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 37410 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:52.632505 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:52.639076 systemd[1]: Started session-9.scope. Sep 13 00:07:52.639171 systemd-logind[1207]: New session 9 of user core. Sep 13 00:07:52.781851 sshd[3375]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:52.786873 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:37420.service. Sep 13 00:07:52.787338 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:37410.service: Deactivated successfully. Sep 13 00:07:52.789177 systemd-logind[1207]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:07:52.789267 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:07:52.790395 systemd-logind[1207]: Removed session 9. Sep 13 00:07:52.831963 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 37420 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:52.833253 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:52.837759 systemd-logind[1207]: New session 10 of user core. Sep 13 00:07:52.838632 systemd[1]: Started session-10.scope. Sep 13 00:07:53.019666 sshd[3388]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:53.023554 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:37426.service. Sep 13 00:07:53.031007 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:37420.service: Deactivated successfully. Sep 13 00:07:53.032765 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:07:53.035421 systemd-logind[1207]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:07:53.038425 systemd-logind[1207]: Removed session 10. 
Sep 13 00:07:53.068416 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 37426 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:53.069689 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:53.074850 systemd-logind[1207]: New session 11 of user core. Sep 13 00:07:53.075522 systemd[1]: Started session-11.scope. Sep 13 00:07:53.219892 sshd[3399]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:53.222604 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:37426.service: Deactivated successfully. Sep 13 00:07:53.223342 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:07:53.224150 systemd-logind[1207]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:07:53.224954 systemd-logind[1207]: Removed session 11. Sep 13 00:07:58.225213 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:37438.service. Sep 13 00:07:58.260485 sshd[3414]: Accepted publickey for core from 10.0.0.1 port 37438 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:07:58.261787 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:58.265470 systemd-logind[1207]: New session 12 of user core. Sep 13 00:07:58.266961 systemd[1]: Started session-12.scope. Sep 13 00:07:58.387065 sshd[3414]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:58.389185 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:07:58.389691 systemd-logind[1207]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:07:58.389834 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:37438.service: Deactivated successfully. Sep 13 00:07:58.390966 systemd-logind[1207]: Removed session 12. Sep 13 00:08:03.392051 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:60496.service. 
Sep 13 00:08:03.426533 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 60496 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:03.428218 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:03.432798 systemd-logind[1207]: New session 13 of user core. Sep 13 00:08:03.433180 systemd[1]: Started session-13.scope. Sep 13 00:08:03.541478 sshd[3431]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:03.545540 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:60498.service. Sep 13 00:08:03.546383 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:60496.service: Deactivated successfully. Sep 13 00:08:03.547377 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:08:03.548159 systemd-logind[1207]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:08:03.549065 systemd-logind[1207]: Removed session 13. Sep 13 00:08:03.581099 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:03.582270 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:03.585686 systemd-logind[1207]: New session 14 of user core. Sep 13 00:08:03.586715 systemd[1]: Started session-14.scope. Sep 13 00:08:03.763932 sshd[3443]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:03.767195 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:60498.service: Deactivated successfully. Sep 13 00:08:03.767859 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:08:03.768839 systemd-logind[1207]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:08:03.770136 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:60500.service. Sep 13 00:08:03.770859 systemd-logind[1207]: Removed session 14. 
Sep 13 00:08:03.808252 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 60500 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:03.809513 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:03.813720 systemd-logind[1207]: New session 15 of user core. Sep 13 00:08:03.814742 systemd[1]: Started session-15.scope. Sep 13 00:08:04.433449 sshd[3456]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:04.436543 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:60500.service: Deactivated successfully. Sep 13 00:08:04.437619 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:08:04.438236 systemd-logind[1207]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:08:04.439471 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:60514.service. Sep 13 00:08:04.443731 systemd-logind[1207]: Removed session 15. Sep 13 00:08:04.480501 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 60514 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:04.482032 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:04.486215 systemd[1]: Started session-16.scope. Sep 13 00:08:04.486331 systemd-logind[1207]: New session 16 of user core. Sep 13 00:08:04.714104 sshd[3475]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:04.723259 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:60526.service. Sep 13 00:08:04.723696 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:60514.service: Deactivated successfully. Sep 13 00:08:04.728693 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:08:04.729345 systemd-logind[1207]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:08:04.730218 systemd-logind[1207]: Removed session 16. 
Sep 13 00:08:04.765936 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 60526 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:04.767385 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:04.773529 systemd-logind[1207]: New session 17 of user core. Sep 13 00:08:04.774698 systemd[1]: Started session-17.scope. Sep 13 00:08:04.901551 sshd[3489]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:04.904256 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:60526.service: Deactivated successfully. Sep 13 00:08:04.905052 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:08:04.905599 systemd-logind[1207]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:08:04.906215 systemd-logind[1207]: Removed session 17. Sep 13 00:08:09.907345 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:60088.service. Sep 13 00:08:09.941682 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 60088 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:09.943288 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:09.946679 systemd-logind[1207]: New session 18 of user core. Sep 13 00:08:09.947594 systemd[1]: Started session-18.scope. Sep 13 00:08:10.059177 sshd[3505]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:10.062169 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:60088.service: Deactivated successfully. Sep 13 00:08:10.062999 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:08:10.063492 systemd-logind[1207]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:08:10.064216 systemd-logind[1207]: Removed session 18. Sep 13 00:08:15.069414 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:60098.service. 
Sep 13 00:08:15.111218 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 60098 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:15.112784 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:15.119934 systemd-logind[1207]: New session 19 of user core. Sep 13 00:08:15.120833 systemd[1]: Started session-19.scope. Sep 13 00:08:15.248858 sshd[3520]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:15.251429 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:60098.service: Deactivated successfully. Sep 13 00:08:15.252273 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:08:15.252986 systemd-logind[1207]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:08:15.254035 systemd-logind[1207]: Removed session 19. Sep 13 00:08:15.445291 kubelet[1928]: E0913 00:08:15.445182 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:20.254902 systemd[1]: Started sshd@19-10.0.0.24:22-10.0.0.1:47568.service. Sep 13 00:08:20.293121 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 47568 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:20.294470 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:20.298986 systemd-logind[1207]: New session 20 of user core. Sep 13 00:08:20.300091 systemd[1]: Started session-20.scope. Sep 13 00:08:20.414263 sshd[3533]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:20.418482 systemd[1]: Started sshd@20-10.0.0.24:22-10.0.0.1:47570.service. Sep 13 00:08:20.419609 systemd[1]: sshd@19-10.0.0.24:22-10.0.0.1:47568.service: Deactivated successfully. Sep 13 00:08:20.420684 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:08:20.421490 systemd-logind[1207]: Session 20 logged out. 
Waiting for processes to exit. Sep 13 00:08:20.422369 systemd-logind[1207]: Removed session 20. Sep 13 00:08:20.455212 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 47570 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:20.456504 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:20.459915 systemd-logind[1207]: New session 21 of user core. Sep 13 00:08:20.460949 systemd[1]: Started session-21.scope. Sep 13 00:08:22.468093 env[1219]: time="2025-09-13T00:08:22.468037611Z" level=info msg="StopContainer for \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\" with timeout 30 (s)" Sep 13 00:08:22.468494 env[1219]: time="2025-09-13T00:08:22.468401777Z" level=info msg="Stop container \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\" with signal terminated" Sep 13 00:08:22.486777 systemd[1]: cri-containerd-7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed.scope: Deactivated successfully. Sep 13 00:08:22.499199 env[1219]: time="2025-09-13T00:08:22.499096397Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:08:22.504671 env[1219]: time="2025-09-13T00:08:22.504634681Z" level=info msg="StopContainer for \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\" with timeout 2 (s)" Sep 13 00:08:22.505039 env[1219]: time="2025-09-13T00:08:22.504938725Z" level=info msg="Stop container \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\" with signal terminated" Sep 13 00:08:22.508279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:22.515606 systemd-networkd[1051]: lxc_health: Link DOWN Sep 13 00:08:22.515616 systemd-networkd[1051]: lxc_health: Lost carrier Sep 13 00:08:22.532731 env[1219]: time="2025-09-13T00:08:22.532682062Z" level=info msg="shim disconnected" id=7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed Sep 13 00:08:22.532731 env[1219]: time="2025-09-13T00:08:22.532728902Z" level=warning msg="cleaning up after shim disconnected" id=7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed namespace=k8s.io Sep 13 00:08:22.532731 env[1219]: time="2025-09-13T00:08:22.532737782Z" level=info msg="cleaning up dead shim" Sep 13 00:08:22.542049 env[1219]: time="2025-09-13T00:08:22.542007122Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3601 runtime=io.containerd.runc.v2\n" Sep 13 00:08:22.545673 env[1219]: time="2025-09-13T00:08:22.545343292Z" level=info msg="StopContainer for \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\" returns successfully" Sep 13 00:08:22.550044 env[1219]: time="2025-09-13T00:08:22.546328026Z" level=info msg="StopPodSandbox for \"e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732\"" Sep 13 00:08:22.550044 env[1219]: time="2025-09-13T00:08:22.546389147Z" level=info msg="Container to stop \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.548170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732-shm.mount: Deactivated successfully. Sep 13 00:08:22.553860 systemd[1]: cri-containerd-1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce.scope: Deactivated successfully. Sep 13 00:08:22.554225 systemd[1]: cri-containerd-1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce.scope: Consumed 6.310s CPU time. 
Sep 13 00:08:22.556733 systemd[1]: cri-containerd-e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732.scope: Deactivated successfully. Sep 13 00:08:22.574092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce-rootfs.mount: Deactivated successfully. Sep 13 00:08:22.581205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732-rootfs.mount: Deactivated successfully. Sep 13 00:08:22.582253 env[1219]: time="2025-09-13T00:08:22.582208045Z" level=info msg="shim disconnected" id=1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce Sep 13 00:08:22.582253 env[1219]: time="2025-09-13T00:08:22.582256726Z" level=warning msg="cleaning up after shim disconnected" id=1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce namespace=k8s.io Sep 13 00:08:22.582419 env[1219]: time="2025-09-13T00:08:22.582266286Z" level=info msg="cleaning up dead shim" Sep 13 00:08:22.582564 env[1219]: time="2025-09-13T00:08:22.582531890Z" level=info msg="shim disconnected" id=e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732 Sep 13 00:08:22.582684 env[1219]: time="2025-09-13T00:08:22.582666732Z" level=warning msg="cleaning up after shim disconnected" id=e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732 namespace=k8s.io Sep 13 00:08:22.582751 env[1219]: time="2025-09-13T00:08:22.582732093Z" level=info msg="cleaning up dead shim" Sep 13 00:08:22.588962 env[1219]: time="2025-09-13T00:08:22.588901186Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3648 runtime=io.containerd.runc.v2\n" Sep 13 00:08:22.590860 env[1219]: time="2025-09-13T00:08:22.590816054Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3649 
runtime=io.containerd.runc.v2\n" Sep 13 00:08:22.591279 env[1219]: time="2025-09-13T00:08:22.591240781Z" level=info msg="TearDown network for sandbox \"e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732\" successfully" Sep 13 00:08:22.591381 env[1219]: time="2025-09-13T00:08:22.591363902Z" level=info msg="StopPodSandbox for \"e01b06a354989facf4c3ba4e5a7b7a0772fcbd72c1567b73be37bc412596a732\" returns successfully" Sep 13 00:08:22.595944 env[1219]: time="2025-09-13T00:08:22.595740808Z" level=info msg="StopContainer for \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\" returns successfully" Sep 13 00:08:22.596201 env[1219]: time="2025-09-13T00:08:22.596174255Z" level=info msg="StopPodSandbox for \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\"" Sep 13 00:08:22.596266 env[1219]: time="2025-09-13T00:08:22.596234096Z" level=info msg="Container to stop \"22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.596266 env[1219]: time="2025-09-13T00:08:22.596250656Z" level=info msg="Container to stop \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.596266 env[1219]: time="2025-09-13T00:08:22.596262816Z" level=info msg="Container to stop \"c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.596350 env[1219]: time="2025-09-13T00:08:22.596273776Z" level=info msg="Container to stop \"8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.596350 env[1219]: time="2025-09-13T00:08:22.596284976Z" level=info msg="Container to stop \"4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:22.603074 systemd[1]: cri-containerd-7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f.scope: Deactivated successfully. Sep 13 00:08:22.630674 kubelet[1928]: I0913 00:08:22.630534 1928 scope.go:117] "RemoveContainer" containerID="7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed" Sep 13 00:08:22.632437 env[1219]: time="2025-09-13T00:08:22.632400118Z" level=info msg="RemoveContainer for \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\"" Sep 13 00:08:22.642859 kubelet[1928]: I0913 00:08:22.642803 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4695fdbe-8eba-4e3c-864b-932851ceb7e2-cilium-config-path\") pod \"4695fdbe-8eba-4e3c-864b-932851ceb7e2\" (UID: \"4695fdbe-8eba-4e3c-864b-932851ceb7e2\") " Sep 13 00:08:22.642972 kubelet[1928]: I0913 00:08:22.642873 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h48hh\" (UniqueName: \"kubernetes.io/projected/4695fdbe-8eba-4e3c-864b-932851ceb7e2-kube-api-access-h48hh\") pod \"4695fdbe-8eba-4e3c-864b-932851ceb7e2\" (UID: \"4695fdbe-8eba-4e3c-864b-932851ceb7e2\") " Sep 13 00:08:22.646185 kubelet[1928]: I0913 00:08:22.646064 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4695fdbe-8eba-4e3c-864b-932851ceb7e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4695fdbe-8eba-4e3c-864b-932851ceb7e2" (UID: "4695fdbe-8eba-4e3c-864b-932851ceb7e2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:08:22.647372 kubelet[1928]: I0913 00:08:22.647343 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4695fdbe-8eba-4e3c-864b-932851ceb7e2-kube-api-access-h48hh" (OuterVolumeSpecName: "kube-api-access-h48hh") pod "4695fdbe-8eba-4e3c-864b-932851ceb7e2" (UID: "4695fdbe-8eba-4e3c-864b-932851ceb7e2"). InnerVolumeSpecName "kube-api-access-h48hh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:22.650249 env[1219]: time="2025-09-13T00:08:22.650198626Z" level=info msg="shim disconnected" id=7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f Sep 13 00:08:22.650329 env[1219]: time="2025-09-13T00:08:22.650254627Z" level=warning msg="cleaning up after shim disconnected" id=7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f namespace=k8s.io Sep 13 00:08:22.650329 env[1219]: time="2025-09-13T00:08:22.650266107Z" level=info msg="cleaning up dead shim" Sep 13 00:08:22.655002 env[1219]: time="2025-09-13T00:08:22.654940177Z" level=info msg="RemoveContainer for \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\" returns successfully" Sep 13 00:08:22.655302 kubelet[1928]: I0913 00:08:22.655273 1928 scope.go:117] "RemoveContainer" containerID="7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed" Sep 13 00:08:22.655616 env[1219]: time="2025-09-13T00:08:22.655539826Z" level=error msg="ContainerStatus for \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\": not found" Sep 13 00:08:22.655774 kubelet[1928]: E0913 00:08:22.655745 1928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\": not found" containerID="7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed" Sep 13 00:08:22.655849 kubelet[1928]: I0913 00:08:22.655781 1928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed"} err="failed to get container status \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"7863e4f1d9de93e7f0edf51c866aa38ecdba223c07c7cd04a696a124b71b34ed\": not found" Sep 13 00:08:22.661087 env[1219]: time="2025-09-13T00:08:22.661037308Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3692 runtime=io.containerd.runc.v2\n" Sep 13 00:08:22.661414 env[1219]: time="2025-09-13T00:08:22.661375913Z" level=info msg="TearDown network for sandbox \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" successfully" Sep 13 00:08:22.661460 env[1219]: time="2025-09-13T00:08:22.661403194Z" level=info msg="StopPodSandbox for \"7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f\" returns successfully" Sep 13 00:08:22.744106 kubelet[1928]: I0913 00:08:22.743974 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-cgroup\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.744106 kubelet[1928]: I0913 00:08:22.744020 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-lib-modules\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 
00:08:22.744106 kubelet[1928]: I0913 00:08:22.744051 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1982f8e-190f-4964-8349-227a7b0fc2e6-clustermesh-secrets\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.744106 kubelet[1928]: I0913 00:08:22.744069 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-bpf-maps\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.744106 kubelet[1928]: I0913 00:08:22.744086 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-hostproc\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.744347 kubelet[1928]: I0913 00:08:22.744097 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.744505 kubelet[1928]: I0913 00:08:22.744429 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.744505 kubelet[1928]: I0913 00:08:22.744487 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.744643 kubelet[1928]: I0913 00:08:22.744613 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.745579 kubelet[1928]: I0913 00:08:22.745557 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-xtables-lock\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.745691 kubelet[1928]: I0913 00:08:22.745667 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.745787 kubelet[1928]: I0913 00:08:22.745763 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.745997 kubelet[1928]: I0913 00:08:22.745974 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cni-path\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746040 kubelet[1928]: I0913 00:08:22.746021 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-net\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746071 kubelet[1928]: I0913 00:08:22.746044 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-config-path\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746660 kubelet[1928]: I0913 00:08:22.746100 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-hubble-tls\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746660 kubelet[1928]: I0913 00:08:22.746132 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-etc-cni-netd\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746660 kubelet[1928]: I0913 00:08:22.746137 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.746660 kubelet[1928]: I0913 00:08:22.746151 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-run\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746660 kubelet[1928]: I0913 00:08:22.746166 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-kernel\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746885 kubelet[1928]: I0913 00:08:22.746197 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.746885 kubelet[1928]: I0913 00:08:22.746200 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.746885 kubelet[1928]: I0913 00:08:22.746220 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:22.746885 kubelet[1928]: I0913 00:08:22.746242 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk2td\" (UniqueName: \"kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-kube-api-access-sk2td\") pod \"c1982f8e-190f-4964-8349-227a7b0fc2e6\" (UID: \"c1982f8e-190f-4964-8349-227a7b0fc2e6\") " Sep 13 00:08:22.746885 kubelet[1928]: I0913 00:08:22.746286 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.746885 kubelet[1928]: I0913 00:08:22.746296 1928 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746305 1928 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746314 1928 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746322 1928 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746331 1928 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746339 1928 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746348 1928 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746355 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747018 kubelet[1928]: I0913 00:08:22.746363 1928 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1982f8e-190f-4964-8349-227a7b0fc2e6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 
00:08:22.747234 kubelet[1928]: I0913 00:08:22.746371 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4695fdbe-8eba-4e3c-864b-932851ceb7e2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.747234 kubelet[1928]: I0913 00:08:22.746378 1928 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h48hh\" (UniqueName: \"kubernetes.io/projected/4695fdbe-8eba-4e3c-864b-932851ceb7e2-kube-api-access-h48hh\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.748094 kubelet[1928]: I0913 00:08:22.748059 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:08:22.748904 kubelet[1928]: I0913 00:08:22.748876 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-kube-api-access-sk2td" (OuterVolumeSpecName: "kube-api-access-sk2td") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "kube-api-access-sk2td". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:22.749138 kubelet[1928]: I0913 00:08:22.749059 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:22.749242 kubelet[1928]: I0913 00:08:22.749175 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1982f8e-190f-4964-8349-227a7b0fc2e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1982f8e-190f-4964-8349-227a7b0fc2e6" (UID: "c1982f8e-190f-4964-8349-227a7b0fc2e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:22.846906 kubelet[1928]: I0913 00:08:22.846847 1928 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1982f8e-190f-4964-8349-227a7b0fc2e6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.846906 kubelet[1928]: I0913 00:08:22.846884 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1982f8e-190f-4964-8349-227a7b0fc2e6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.846906 kubelet[1928]: I0913 00:08:22.846893 1928 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.846906 kubelet[1928]: I0913 00:08:22.846901 1928 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sk2td\" (UniqueName: \"kubernetes.io/projected/c1982f8e-190f-4964-8349-227a7b0fc2e6-kube-api-access-sk2td\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:22.934719 systemd[1]: Removed slice kubepods-besteffort-pod4695fdbe_8eba_4e3c_864b_932851ceb7e2.slice. Sep 13 00:08:23.479555 systemd[1]: var-lib-kubelet-pods-4695fdbe\x2d8eba\x2d4e3c\x2d864b\x2d932851ceb7e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh48hh.mount: Deactivated successfully. 
Sep 13 00:08:23.480270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f-rootfs.mount: Deactivated successfully. Sep 13 00:08:23.480357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7852e4ccd0f0908255f0d3b6553362ae4baeff15a1f813fdeef96341e7ed9c8f-shm.mount: Deactivated successfully. Sep 13 00:08:23.480415 systemd[1]: var-lib-kubelet-pods-c1982f8e\x2d190f\x2d4964\x2d8349\x2d227a7b0fc2e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsk2td.mount: Deactivated successfully. Sep 13 00:08:23.480470 systemd[1]: var-lib-kubelet-pods-c1982f8e\x2d190f\x2d4964\x2d8349\x2d227a7b0fc2e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:08:23.480527 systemd[1]: var-lib-kubelet-pods-c1982f8e\x2d190f\x2d4964\x2d8349\x2d227a7b0fc2e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:08:23.639833 kubelet[1928]: I0913 00:08:23.639796 1928 scope.go:117] "RemoveContainer" containerID="1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce" Sep 13 00:08:23.642457 env[1219]: time="2025-09-13T00:08:23.642400706Z" level=info msg="RemoveContainer for \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\"" Sep 13 00:08:23.644526 systemd[1]: Removed slice kubepods-burstable-podc1982f8e_190f_4964_8349_227a7b0fc2e6.slice. Sep 13 00:08:23.644653 systemd[1]: kubepods-burstable-podc1982f8e_190f_4964_8349_227a7b0fc2e6.slice: Consumed 6.438s CPU time. 
Sep 13 00:08:23.647305 env[1219]: time="2025-09-13T00:08:23.647267457Z" level=info msg="RemoveContainer for \"1b766115b1a60333d233c1f90a5f671ff4c8a0d8158a530b200f01cce80312ce\" returns successfully" Sep 13 00:08:23.648257 kubelet[1928]: I0913 00:08:23.648226 1928 scope.go:117] "RemoveContainer" containerID="4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19" Sep 13 00:08:23.651358 env[1219]: time="2025-09-13T00:08:23.651016632Z" level=info msg="RemoveContainer for \"4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19\"" Sep 13 00:08:23.655441 env[1219]: time="2025-09-13T00:08:23.655394575Z" level=info msg="RemoveContainer for \"4201db37ab3b548ac6f6a482bbbf09a223f8fd78a53a8453ad973751c8458f19\" returns successfully" Sep 13 00:08:23.655994 kubelet[1928]: I0913 00:08:23.655959 1928 scope.go:117] "RemoveContainer" containerID="8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798" Sep 13 00:08:23.657153 env[1219]: time="2025-09-13T00:08:23.657107441Z" level=info msg="RemoveContainer for \"8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798\"" Sep 13 00:08:23.661262 env[1219]: time="2025-09-13T00:08:23.661200340Z" level=info msg="RemoveContainer for \"8d8a58de2927a564331b2007c52ee4064850a30701c53456a0655e1fcc275798\" returns successfully" Sep 13 00:08:23.661530 kubelet[1928]: I0913 00:08:23.661512 1928 scope.go:117] "RemoveContainer" containerID="c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d" Sep 13 00:08:23.663318 env[1219]: time="2025-09-13T00:08:23.663282211Z" level=info msg="RemoveContainer for \"c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d\"" Sep 13 00:08:23.666681 env[1219]: time="2025-09-13T00:08:23.666643460Z" level=info msg="RemoveContainer for \"c232b5968c5a2215869c81ab10dd50e267e1e5edaf6de6470eedcf48148c5c8d\" returns successfully" Sep 13 00:08:23.669762 kubelet[1928]: I0913 00:08:23.666864 1928 scope.go:117] "RemoveContainer" 
containerID="22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52" Sep 13 00:08:23.670873 env[1219]: time="2025-09-13T00:08:23.670824001Z" level=info msg="RemoveContainer for \"22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52\"" Sep 13 00:08:23.673277 env[1219]: time="2025-09-13T00:08:23.673233836Z" level=info msg="RemoveContainer for \"22d13c0c277efb5c849382ad39b1ef85d34d337817394ed9e2b66ae8047e4a52\" returns successfully" Sep 13 00:08:24.431163 sshd[3545]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:24.434942 systemd[1]: Started sshd@21-10.0.0.24:22-10.0.0.1:47582.service. Sep 13 00:08:24.436287 systemd-logind[1207]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:08:24.436477 systemd[1]: sshd@20-10.0.0.24:22-10.0.0.1:47570.service: Deactivated successfully. Sep 13 00:08:24.437094 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:08:24.437267 systemd[1]: session-21.scope: Consumed 1.326s CPU time. Sep 13 00:08:24.439616 systemd-logind[1207]: Removed session 21. 
Sep 13 00:08:24.453372 kubelet[1928]: I0913 00:08:24.453322 1928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4695fdbe-8eba-4e3c-864b-932851ceb7e2" path="/var/lib/kubelet/pods/4695fdbe-8eba-4e3c-864b-932851ceb7e2/volumes" Sep 13 00:08:24.453737 kubelet[1928]: I0913 00:08:24.453710 1928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1982f8e-190f-4964-8349-227a7b0fc2e6" path="/var/lib/kubelet/pods/c1982f8e-190f-4964-8349-227a7b0fc2e6/volumes" Sep 13 00:08:24.484459 kubelet[1928]: E0913 00:08:24.484336 1928 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:08:24.491070 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 47582 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:24.492794 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:24.498279 systemd-logind[1207]: New session 22 of user core. Sep 13 00:08:24.498742 systemd[1]: Started session-22.scope. Sep 13 00:08:25.464598 sshd[3712]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:25.472742 systemd[1]: Started sshd@22-10.0.0.24:22-10.0.0.1:47588.service. Sep 13 00:08:25.473473 systemd[1]: sshd@21-10.0.0.24:22-10.0.0.1:47582.service: Deactivated successfully. Sep 13 00:08:25.474425 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:08:25.478419 systemd-logind[1207]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:08:25.479968 systemd-logind[1207]: Removed session 22. Sep 13 00:08:25.495688 systemd[1]: Created slice kubepods-burstable-poda2015d27_a64b_4e9c_b25b_f57f9979e567.slice. 
Sep 13 00:08:25.518417 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 47588 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:25.519817 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:25.523218 systemd-logind[1207]: New session 23 of user core. Sep 13 00:08:25.524419 systemd[1]: Started session-23.scope. Sep 13 00:08:25.562272 kubelet[1928]: I0913 00:08:25.562155 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-hostproc\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.562272 kubelet[1928]: I0913 00:08:25.562203 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-etc-cni-netd\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.562272 kubelet[1928]: I0913 00:08:25.562221 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-ipsec-secrets\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.562272 kubelet[1928]: I0913 00:08:25.562239 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-lib-modules\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.562272 kubelet[1928]: I0913 00:08:25.562254 1928 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-xtables-lock\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.562272 kubelet[1928]: I0913 00:08:25.562272 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-run\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563095 kubelet[1928]: I0913 00:08:25.562290 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-cgroup\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563095 kubelet[1928]: I0913 00:08:25.562304 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cni-path\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563095 kubelet[1928]: I0913 00:08:25.562322 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-bpf-maps\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563095 kubelet[1928]: I0913 00:08:25.562347 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-clustermesh-secrets\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563095 kubelet[1928]: I0913 00:08:25.562363 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-config-path\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563095 kubelet[1928]: I0913 00:08:25.562381 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-hubble-tls\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563545 kubelet[1928]: I0913 00:08:25.562395 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-net\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563545 kubelet[1928]: I0913 00:08:25.562411 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-kernel\") pod \"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.563545 kubelet[1928]: I0913 00:08:25.562425 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjs6\" (UniqueName: \"kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-kube-api-access-9rjs6\") pod 
\"cilium-npvf5\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " pod="kube-system/cilium-npvf5" Sep 13 00:08:25.649228 sshd[3724]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:25.653606 systemd[1]: Started sshd@23-10.0.0.24:22-10.0.0.1:47604.service. Sep 13 00:08:25.654404 systemd[1]: sshd@22-10.0.0.24:22-10.0.0.1:47588.service: Deactivated successfully. Sep 13 00:08:25.655634 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:08:25.657061 systemd-logind[1207]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:08:25.658089 systemd-logind[1207]: Removed session 23. Sep 13 00:08:25.659970 kubelet[1928]: E0913 00:08:25.659927 1928 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-9rjs6 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-npvf5" podUID="a2015d27-a64b-4e9c-b25b-f57f9979e567" Sep 13 00:08:25.705140 sshd[3737]: Accepted publickey for core from 10.0.0.1 port 47604 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:08:25.706093 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:25.709719 systemd-logind[1207]: New session 24 of user core. Sep 13 00:08:25.710824 systemd[1]: Started session-24.scope. 
Sep 13 00:08:26.414798 kubelet[1928]: I0913 00:08:26.414734 1928 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:08:26Z","lastTransitionTime":"2025-09-13T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:08:26.769730 kubelet[1928]: I0913 00:08:26.769613 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjs6\" (UniqueName: \"kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-kube-api-access-9rjs6\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.769730 kubelet[1928]: I0913 00:08:26.769662 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-hubble-tls\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.769730 kubelet[1928]: I0913 00:08:26.769683 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-hostproc\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.769730 kubelet[1928]: I0913 00:08:26.769699 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-xtables-lock\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.769730 kubelet[1928]: I0913 00:08:26.769715 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-clustermesh-secrets\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.769730 kubelet[1928]: I0913 00:08:26.769731 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cni-path\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770240 kubelet[1928]: I0913 00:08:26.769749 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-config-path\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770240 kubelet[1928]: I0913 00:08:26.769763 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-kernel\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770240 kubelet[1928]: I0913 00:08:26.769780 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-ipsec-secrets\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770240 kubelet[1928]: I0913 00:08:26.769807 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-lib-modules\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 
00:08:26.770240 kubelet[1928]: I0913 00:08:26.769824 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-etc-cni-netd\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770240 kubelet[1928]: I0913 00:08:26.769826 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770378 kubelet[1928]: I0913 00:08:26.769837 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-cgroup\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770378 kubelet[1928]: I0913 00:08:26.769871 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770378 kubelet[1928]: I0913 00:08:26.769903 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770378 kubelet[1928]: I0913 00:08:26.769903 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-run\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770378 kubelet[1928]: I0913 00:08:26.769919 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770486 kubelet[1928]: I0913 00:08:26.769926 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-bpf-maps\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770486 kubelet[1928]: I0913 00:08:26.769940 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770486 kubelet[1928]: I0913 00:08:26.769946 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-net\") pod \"a2015d27-a64b-4e9c-b25b-f57f9979e567\" (UID: \"a2015d27-a64b-4e9c-b25b-f57f9979e567\") " Sep 13 00:08:26.770486 kubelet[1928]: I0913 00:08:26.769980 1928 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.770486 kubelet[1928]: I0913 00:08:26.769990 1928 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.770486 kubelet[1928]: I0913 00:08:26.770000 1928 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.770486 kubelet[1928]: I0913 00:08:26.770008 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.770632 kubelet[1928]: I0913 00:08:26.770028 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.770632 kubelet[1928]: I0913 00:08:26.770043 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod 
"a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770632 kubelet[1928]: I0913 00:08:26.770057 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770632 kubelet[1928]: I0913 00:08:26.770071 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.770632 kubelet[1928]: I0913 00:08:26.770084 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.771791 kubelet[1928]: I0913 00:08:26.771742 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:08:26.774201 systemd[1]: var-lib-kubelet-pods-a2015d27\x2da64b\x2d4e9c\x2db25b\x2df57f9979e567-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:08:26.775540 kubelet[1928]: I0913 00:08:26.774655 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:26.775540 kubelet[1928]: I0913 00:08:26.774707 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:08:26.777024 kubelet[1928]: I0913 00:08:26.775892 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:26.776831 systemd[1]: var-lib-kubelet-pods-a2015d27\x2da64b\x2d4e9c\x2db25b\x2df57f9979e567-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:08:26.777394 kubelet[1928]: I0913 00:08:26.777368 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:26.779252 kubelet[1928]: I0913 00:08:26.779219 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-kube-api-access-9rjs6" (OuterVolumeSpecName: "kube-api-access-9rjs6") pod "a2015d27-a64b-4e9c-b25b-f57f9979e567" (UID: "a2015d27-a64b-4e9c-b25b-f57f9979e567"). InnerVolumeSpecName "kube-api-access-9rjs6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:26.780325 systemd[1]: var-lib-kubelet-pods-a2015d27\x2da64b\x2d4e9c\x2db25b\x2df57f9979e567-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9rjs6.mount: Deactivated successfully. Sep 13 00:08:26.780420 systemd[1]: var-lib-kubelet-pods-a2015d27\x2da64b\x2d4e9c\x2db25b\x2df57f9979e567-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:08:26.871028 kubelet[1928]: I0913 00:08:26.870984 1928 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871028 kubelet[1928]: I0913 00:08:26.871014 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871028 kubelet[1928]: I0913 00:08:26.871027 1928 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871028 kubelet[1928]: I0913 00:08:26.871036 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2015d27-a64b-4e9c-b25b-f57f9979e567-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871302 kubelet[1928]: I0913 00:08:26.871045 1928 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871302 kubelet[1928]: I0913 00:08:26.871053 1928 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871302 kubelet[1928]: I0913 00:08:26.871061 1928 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871302 kubelet[1928]: I0913 00:08:26.871069 1928 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2015d27-a64b-4e9c-b25b-f57f9979e567-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871302 kubelet[1928]: I0913 00:08:26.871077 1928 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9rjs6\" (UniqueName: \"kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-kube-api-access-9rjs6\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:26.871302 kubelet[1928]: I0913 00:08:26.871085 1928 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2015d27-a64b-4e9c-b25b-f57f9979e567-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:27.653674 systemd[1]: Removed slice kubepods-burstable-poda2015d27_a64b_4e9c_b25b_f57f9979e567.slice. Sep 13 00:08:27.697161 systemd[1]: Created slice kubepods-burstable-poddfdaab2b_e36c_4d7c_87ba_6847d9eaba4d.slice. Sep 13 00:08:27.776447 kubelet[1928]: I0913 00:08:27.776393 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-cilium-run\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776447 kubelet[1928]: I0913 00:08:27.776441 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-cni-path\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776778 kubelet[1928]: I0913 00:08:27.776470 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-clustermesh-secrets\") 
pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776778 kubelet[1928]: I0913 00:08:27.776490 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-cilium-ipsec-secrets\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776778 kubelet[1928]: I0913 00:08:27.776509 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-bpf-maps\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776778 kubelet[1928]: I0913 00:08:27.776524 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-host-proc-sys-net\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776778 kubelet[1928]: I0913 00:08:27.776546 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-hubble-tls\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776934 kubelet[1928]: I0913 00:08:27.776562 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzsl\" (UniqueName: \"kubernetes.io/projected/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-kube-api-access-htzsl\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 
00:08:27.776934 kubelet[1928]: I0913 00:08:27.776580 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-lib-modules\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776934 kubelet[1928]: I0913 00:08:27.776596 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-xtables-lock\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776934 kubelet[1928]: I0913 00:08:27.776619 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-cilium-cgroup\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776934 kubelet[1928]: I0913 00:08:27.776635 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-etc-cni-netd\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.776934 kubelet[1928]: I0913 00:08:27.776650 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-host-proc-sys-kernel\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.777063 kubelet[1928]: I0913 00:08:27.776665 1928 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-hostproc\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:27.777063 kubelet[1928]: I0913 00:08:27.776689 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d-cilium-config-path\") pod \"cilium-wfltr\" (UID: \"dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d\") " pod="kube-system/cilium-wfltr" Sep 13 00:08:28.000157 kubelet[1928]: E0913 00:08:28.000024 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:28.000959 env[1219]: time="2025-09-13T00:08:28.000826799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wfltr,Uid:dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:28.016208 env[1219]: time="2025-09-13T00:08:28.016134396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:28.016341 env[1219]: time="2025-09-13T00:08:28.016179316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:28.016341 env[1219]: time="2025-09-13T00:08:28.016190196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:28.016341 env[1219]: time="2025-09-13T00:08:28.016324158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee pid=3769 runtime=io.containerd.runc.v2 Sep 13 00:08:28.030831 systemd[1]: Started cri-containerd-b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee.scope. Sep 13 00:08:28.056053 env[1219]: time="2025-09-13T00:08:28.056012426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wfltr,Uid:dfdaab2b-e36c-4d7c-87ba-6847d9eaba4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\"" Sep 13 00:08:28.056738 kubelet[1928]: E0913 00:08:28.056716 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:28.061931 env[1219]: time="2025-09-13T00:08:28.061893702Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:08:28.074756 env[1219]: time="2025-09-13T00:08:28.074702266Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9b13881920cae4d447b5f8647693b168da4cd3711824e45d54e9ccee4fcbb8b\"" Sep 13 00:08:28.075333 env[1219]: time="2025-09-13T00:08:28.075304354Z" level=info msg="StartContainer for \"d9b13881920cae4d447b5f8647693b168da4cd3711824e45d54e9ccee4fcbb8b\"" Sep 13 00:08:28.091741 systemd[1]: Started cri-containerd-d9b13881920cae4d447b5f8647693b168da4cd3711824e45d54e9ccee4fcbb8b.scope. 
Sep 13 00:08:28.124153 env[1219]: time="2025-09-13T00:08:28.122586599Z" level=info msg="StartContainer for \"d9b13881920cae4d447b5f8647693b168da4cd3711824e45d54e9ccee4fcbb8b\" returns successfully" Sep 13 00:08:28.133868 systemd[1]: cri-containerd-d9b13881920cae4d447b5f8647693b168da4cd3711824e45d54e9ccee4fcbb8b.scope: Deactivated successfully. Sep 13 00:08:28.160766 env[1219]: time="2025-09-13T00:08:28.160704328Z" level=info msg="shim disconnected" id=d9b13881920cae4d447b5f8647693b168da4cd3711824e45d54e9ccee4fcbb8b Sep 13 00:08:28.160766 env[1219]: time="2025-09-13T00:08:28.160757088Z" level=warning msg="cleaning up after shim disconnected" id=d9b13881920cae4d447b5f8647693b168da4cd3711824e45d54e9ccee4fcbb8b namespace=k8s.io Sep 13 00:08:28.160766 env[1219]: time="2025-09-13T00:08:28.160767488Z" level=info msg="cleaning up dead shim" Sep 13 00:08:28.167825 env[1219]: time="2025-09-13T00:08:28.167785738Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3853 runtime=io.containerd.runc.v2\n" Sep 13 00:08:28.447071 kubelet[1928]: I0913 00:08:28.447014 1928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2015d27-a64b-4e9c-b25b-f57f9979e567" path="/var/lib/kubelet/pods/a2015d27-a64b-4e9c-b25b-f57f9979e567/volumes" Sep 13 00:08:28.654777 kubelet[1928]: E0913 00:08:28.654504 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:28.664152 env[1219]: time="2025-09-13T00:08:28.662893841Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:08:28.674886 env[1219]: time="2025-09-13T00:08:28.674829794Z" level=info msg="CreateContainer within sandbox 
\"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"be7ff7c7753e2f6eb2f2395b0e372850baf4942a1c450bd8c2cc8e7fb0580c6c\"" Sep 13 00:08:28.676341 env[1219]: time="2025-09-13T00:08:28.676299333Z" level=info msg="StartContainer for \"be7ff7c7753e2f6eb2f2395b0e372850baf4942a1c450bd8c2cc8e7fb0580c6c\"" Sep 13 00:08:28.690150 systemd[1]: Started cri-containerd-be7ff7c7753e2f6eb2f2395b0e372850baf4942a1c450bd8c2cc8e7fb0580c6c.scope. Sep 13 00:08:28.722622 env[1219]: time="2025-09-13T00:08:28.722509365Z" level=info msg="StartContainer for \"be7ff7c7753e2f6eb2f2395b0e372850baf4942a1c450bd8c2cc8e7fb0580c6c\" returns successfully" Sep 13 00:08:28.728307 systemd[1]: cri-containerd-be7ff7c7753e2f6eb2f2395b0e372850baf4942a1c450bd8c2cc8e7fb0580c6c.scope: Deactivated successfully. Sep 13 00:08:28.751273 env[1219]: time="2025-09-13T00:08:28.751227292Z" level=info msg="shim disconnected" id=be7ff7c7753e2f6eb2f2395b0e372850baf4942a1c450bd8c2cc8e7fb0580c6c Sep 13 00:08:28.751273 env[1219]: time="2025-09-13T00:08:28.751272893Z" level=warning msg="cleaning up after shim disconnected" id=be7ff7c7753e2f6eb2f2395b0e372850baf4942a1c450bd8c2cc8e7fb0580c6c namespace=k8s.io Sep 13 00:08:28.751486 env[1219]: time="2025-09-13T00:08:28.751282973Z" level=info msg="cleaning up dead shim" Sep 13 00:08:28.758252 env[1219]: time="2025-09-13T00:08:28.758210742Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3916 runtime=io.containerd.runc.v2\n" Sep 13 00:08:29.486332 kubelet[1928]: E0913 00:08:29.486278 1928 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:08:29.658556 kubelet[1928]: E0913 00:08:29.658525 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:29.664837 env[1219]: time="2025-09-13T00:08:29.664773058Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:08:29.689325 env[1219]: time="2025-09-13T00:08:29.689269603Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0\"" Sep 13 00:08:29.689830 env[1219]: time="2025-09-13T00:08:29.689803650Z" level=info msg="StartContainer for \"b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0\"" Sep 13 00:08:29.718630 systemd[1]: Started cri-containerd-b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0.scope. Sep 13 00:08:29.767135 systemd[1]: cri-containerd-b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0.scope: Deactivated successfully. 
Sep 13 00:08:29.768085 env[1219]: time="2025-09-13T00:08:29.768000026Z" level=info msg="StartContainer for \"b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0\" returns successfully" Sep 13 00:08:29.798133 env[1219]: time="2025-09-13T00:08:29.798073561Z" level=info msg="shim disconnected" id=b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0 Sep 13 00:08:29.798133 env[1219]: time="2025-09-13T00:08:29.798136882Z" level=warning msg="cleaning up after shim disconnected" id=b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0 namespace=k8s.io Sep 13 00:08:29.798641 env[1219]: time="2025-09-13T00:08:29.798146962Z" level=info msg="cleaning up dead shim" Sep 13 00:08:29.818880 env[1219]: time="2025-09-13T00:08:29.818824100Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3973 runtime=io.containerd.runc.v2\n" Sep 13 00:08:29.882389 systemd[1]: run-containerd-runc-k8s.io-b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0-runc.dhEoCZ.mount: Deactivated successfully. Sep 13 00:08:29.882484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b853cd4646ed7a714119f25fa7f6df73b4ecf1c2b0d4a96105b9b22de2d21af0-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:30.666593 kubelet[1928]: E0913 00:08:30.665226 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:30.674460 env[1219]: time="2025-09-13T00:08:30.674420607Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:08:30.699992 env[1219]: time="2025-09-13T00:08:30.699946077Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80\"" Sep 13 00:08:30.700876 env[1219]: time="2025-09-13T00:08:30.700839088Z" level=info msg="StartContainer for \"3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80\"" Sep 13 00:08:30.723003 systemd[1]: Started cri-containerd-3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80.scope. Sep 13 00:08:30.760527 systemd[1]: cri-containerd-3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80.scope: Deactivated successfully. 
Sep 13 00:08:30.763302 env[1219]: time="2025-09-13T00:08:30.763260648Z" level=info msg="StartContainer for \"3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80\" returns successfully" Sep 13 00:08:30.788563 env[1219]: time="2025-09-13T00:08:30.788517355Z" level=info msg="shim disconnected" id=3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80 Sep 13 00:08:30.788800 env[1219]: time="2025-09-13T00:08:30.788779758Z" level=warning msg="cleaning up after shim disconnected" id=3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80 namespace=k8s.io Sep 13 00:08:30.788876 env[1219]: time="2025-09-13T00:08:30.788851199Z" level=info msg="cleaning up dead shim" Sep 13 00:08:30.796581 env[1219]: time="2025-09-13T00:08:30.796539372Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4028 runtime=io.containerd.runc.v2\n" Sep 13 00:08:30.882464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b2d18261a2e521f662bc8882a4c82ebeddfc7f28433a51a1ce7e9a7657c3b80-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:31.445596 kubelet[1928]: E0913 00:08:31.445467 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:31.672677 kubelet[1928]: E0913 00:08:31.671848 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:31.688701 env[1219]: time="2025-09-13T00:08:31.688647293Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:08:31.715224 env[1219]: time="2025-09-13T00:08:31.715127647Z" level=info msg="CreateContainer within sandbox \"b04de65631d8045914932b063c9ce4e352faee2f72f4426531f16d96c07c6bee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80e7c545b03e16fa100ae8d7681a8889c04c62b4796694e96a654f3150405b13\"" Sep 13 00:08:31.716071 env[1219]: time="2025-09-13T00:08:31.716041058Z" level=info msg="StartContainer for \"80e7c545b03e16fa100ae8d7681a8889c04c62b4796694e96a654f3150405b13\"" Sep 13 00:08:31.732705 systemd[1]: Started cri-containerd-80e7c545b03e16fa100ae8d7681a8889c04c62b4796694e96a654f3150405b13.scope. 
Sep 13 00:08:31.773731 env[1219]: time="2025-09-13T00:08:31.773655741Z" level=info msg="StartContainer for \"80e7c545b03e16fa100ae8d7681a8889c04c62b4796694e96a654f3150405b13\" returns successfully" Sep 13 00:08:32.057137 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 13 00:08:32.678961 kubelet[1928]: E0913 00:08:32.678474 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:32.709009 kubelet[1928]: I0913 00:08:32.708910 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wfltr" podStartSLOduration=5.70889342 podStartE2EDuration="5.70889342s" podCreationTimestamp="2025-09-13 00:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:32.708528976 +0000 UTC m=+88.380712396" watchObservedRunningTime="2025-09-13 00:08:32.70889342 +0000 UTC m=+88.381076800" Sep 13 00:08:34.000807 kubelet[1928]: E0913 00:08:34.000774 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:34.943319 systemd-networkd[1051]: lxc_health: Link UP Sep 13 00:08:34.950141 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:08:34.950199 systemd-networkd[1051]: lxc_health: Gained carrier Sep 13 00:08:35.449570 kubelet[1928]: E0913 00:08:35.449531 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:36.002104 kubelet[1928]: E0913 00:08:36.002059 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:36.688673 kubelet[1928]: E0913 00:08:36.688622 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:36.850235 systemd-networkd[1051]: lxc_health: Gained IPv6LL Sep 13 00:08:37.445436 kubelet[1928]: E0913 00:08:37.445390 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:37.690267 kubelet[1928]: E0913 00:08:37.690221 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:08:40.583607 systemd[1]: run-containerd-runc-k8s.io-80e7c545b03e16fa100ae8d7681a8889c04c62b4796694e96a654f3150405b13-runc.ZnqaVG.mount: Deactivated successfully. Sep 13 00:08:40.649241 sshd[3737]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:40.651651 systemd[1]: sshd@23-10.0.0.24:22-10.0.0.1:47604.service: Deactivated successfully. Sep 13 00:08:40.652496 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:08:40.653003 systemd-logind[1207]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:08:40.653639 systemd-logind[1207]: Removed session 24. Sep 13 00:08:41.444987 kubelet[1928]: E0913 00:08:41.444945 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"