Sep 13 00:10:20.675625 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:10:20.675645 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 00:10:20.675653 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:10:20.675658 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 13 00:10:20.675663 kernel: random: crng init done
Sep 13 00:10:20.675669 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:10:20.675675 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 13 00:10:20.675682 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:10:20.675688 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675693 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675698 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675704 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675709 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675715 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675723 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675728 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675735 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:10:20.675740 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 13 00:10:20.675746 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:10:20.675752 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:10:20.675758 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Sep 13 00:10:20.675763 kernel: Zone ranges:
Sep 13 00:10:20.675769 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:10:20.675776 kernel: DMA32 empty
Sep 13 00:10:20.675782 kernel: Normal empty
Sep 13 00:10:20.675787 kernel: Movable zone start for each node
Sep 13 00:10:20.675793 kernel: Early memory node ranges
Sep 13 00:10:20.675799 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 13 00:10:20.675805 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 13 00:10:20.675811 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 13 00:10:20.675816 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 13 00:10:20.675822 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 13 00:10:20.675828 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 13 00:10:20.675834 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 13 00:10:20.676032 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:10:20.676043 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 13 00:10:20.676049 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:10:20.676056 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:10:20.676062 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:10:20.676068 kernel: psci: Trusted OS migration not required
Sep 13 00:10:20.676077 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:10:20.676084 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:10:20.676093 kernel: ACPI: SRAT not present
Sep 13 00:10:20.676099 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 00:10:20.676106 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 00:10:20.676112 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 13 00:10:20.676119 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:10:20.676125 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:10:20.676132 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:10:20.676138 kernel: CPU features: detected: Spectre-v4
Sep 13 00:10:20.676144 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:10:20.676152 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:10:20.676158 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:10:20.676164 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:10:20.676171 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:10:20.676177 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 13 00:10:20.676183 kernel: Policy zone: DMA
Sep 13 00:10:20.676205 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:10:20.676690 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:10:20.676705 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:10:20.676712 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:10:20.676718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:10:20.676729 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Sep 13 00:10:20.676735 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:10:20.676741 kernel: trace event string verifier disabled
Sep 13 00:10:20.676748 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:10:20.676754 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:10:20.676761 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:10:20.676767 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:10:20.676773 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:10:20.676780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:10:20.676786 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:10:20.676792 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:10:20.676799 kernel: GICv3: 256 SPIs implemented
Sep 13 00:10:20.676805 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:10:20.676811 kernel: GICv3: Distributor has no Range Selector support
Sep 13 00:10:20.676818 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:10:20.676824 kernel: GICv3: 16 PPIs implemented
Sep 13 00:10:20.676830 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:10:20.676836 kernel: ACPI: SRAT not present
Sep 13 00:10:20.676842 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:10:20.676848 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:10:20.676864 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:10:20.676871 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 13 00:10:20.676877 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 13 00:10:20.676885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:10:20.676891 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:10:20.676898 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:10:20.676904 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:10:20.676910 kernel: arm-pv: using stolen time PV
Sep 13 00:10:20.676917 kernel: Console: colour dummy device 80x25
Sep 13 00:10:20.676923 kernel: ACPI: Core revision 20210730
Sep 13 00:10:20.676930 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:10:20.676936 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:10:20.676943 kernel: LSM: Security Framework initializing
Sep 13 00:10:20.676950 kernel: SELinux: Initializing.
Sep 13 00:10:20.676957 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:10:20.676963 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:10:20.676969 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:10:20.676976 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:10:20.676982 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:10:20.676988 kernel: Remapping and enabling EFI services.
Sep 13 00:10:20.676994 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:10:20.677001 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:10:20.677008 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:10:20.677015 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 13 00:10:20.677022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:10:20.677028 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:10:20.677034 kernel: Detected PIPT I-cache on CPU2
Sep 13 00:10:20.677041 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 13 00:10:20.677048 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 13 00:10:20.677054 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:10:20.677060 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 13 00:10:20.677067 kernel: Detected PIPT I-cache on CPU3
Sep 13 00:10:20.677074 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 13 00:10:20.677081 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 13 00:10:20.677087 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:10:20.677093 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 13 00:10:20.677104 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:10:20.677112 kernel: SMP: Total of 4 processors activated.
Sep 13 00:10:20.677119 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:10:20.677125 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:10:20.677132 kernel: CPU features: detected: Common not Private translations
Sep 13 00:10:20.677139 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:10:20.677145 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:10:20.677152 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:10:20.677160 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:10:20.677167 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:10:20.677173 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:10:20.677180 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:10:20.677187 kernel: alternatives: patching kernel code
Sep 13 00:10:20.677203 kernel: devtmpfs: initialized
Sep 13 00:10:20.677210 kernel: KASLR enabled
Sep 13 00:10:20.677217 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:10:20.677224 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:10:20.677230 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:10:20.677237 kernel: SMBIOS 3.0.0 present.
Sep 13 00:10:20.677243 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 13 00:10:20.677250 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:10:20.677257 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:10:20.677265 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:10:20.677272 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:10:20.677279 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:10:20.677285 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Sep 13 00:10:20.677292 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:10:20.677298 kernel: cpuidle: using governor menu
Sep 13 00:10:20.677306 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:10:20.677312 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:10:20.677319 kernel: ACPI: bus type PCI registered
Sep 13 00:10:20.677327 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:10:20.677333 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:10:20.677340 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:10:20.677347 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:10:20.677353 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:10:20.677360 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:10:20.677367 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:10:20.677374 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:10:20.677380 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:10:20.677389 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:10:20.677395 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:10:20.677402 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:10:20.677408 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:10:20.677415 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:10:20.677422 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:10:20.677428 kernel: ACPI: Interpreter enabled
Sep 13 00:10:20.677435 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:10:20.677441 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:10:20.677450 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:10:20.677456 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:10:20.677463 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:10:20.677603 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:10:20.677667 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:10:20.677727 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:10:20.677788 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:10:20.677849 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:10:20.677869 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:10:20.677876 kernel: PCI host bridge to bus 0000:00
Sep 13 00:10:20.677950 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:10:20.678008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:10:20.678063 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:10:20.678117 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:10:20.678231 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:10:20.678315 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:10:20.678380 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 13 00:10:20.678443 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 13 00:10:20.678504 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:10:20.678565 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:10:20.678626 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 13 00:10:20.678692 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 13 00:10:20.678750 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:10:20.678804 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:10:20.678872 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:10:20.678882 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:10:20.678890 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:10:20.678896 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:10:20.678906 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:10:20.678913 kernel: iommu: Default domain type: Translated
Sep 13 00:10:20.678920 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:10:20.678926 kernel: vgaarb: loaded
Sep 13 00:10:20.678933 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:10:20.678940 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:10:20.678946 kernel: PTP clock support registered
Sep 13 00:10:20.678953 kernel: Registered efivars operations
Sep 13 00:10:20.678959 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:10:20.678966 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:10:20.678975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:10:20.678982 kernel: pnp: PnP ACPI init
Sep 13 00:10:20.679056 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:10:20.679066 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:10:20.679073 kernel: NET: Registered PF_INET protocol family
Sep 13 00:10:20.679079 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:10:20.679086 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:10:20.679093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:10:20.679102 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:10:20.679108 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:10:20.679115 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:10:20.679122 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:10:20.679129 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:10:20.679135 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:10:20.679142 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:10:20.679149 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 13 00:10:20.679155 kernel: kvm [1]: HYP mode not available
Sep 13 00:10:20.679163 kernel: Initialise system trusted keyrings
Sep 13 00:10:20.679170 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:10:20.679176 kernel: Key type asymmetric registered
Sep 13 00:10:20.679183 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:10:20.679190 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:10:20.679228 kernel: io scheduler mq-deadline registered
Sep 13 00:10:20.679234 kernel: io scheduler kyber registered
Sep 13 00:10:20.679241 kernel: io scheduler bfq registered
Sep 13 00:10:20.679248 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:10:20.679256 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:10:20.679264 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:10:20.679333 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 13 00:10:20.679342 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:10:20.679349 kernel: thunder_xcv, ver 1.0
Sep 13 00:10:20.679355 kernel: thunder_bgx, ver 1.0
Sep 13 00:10:20.679362 kernel: nicpf, ver 1.0
Sep 13 00:10:20.679369 kernel: nicvf, ver 1.0
Sep 13 00:10:20.679438 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:10:20.679498 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:10:20 UTC (1757722220)
Sep 13 00:10:20.679507 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:10:20.679514 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:10:20.679520 kernel: Segment Routing with IPv6
Sep 13 00:10:20.679527 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:10:20.679534 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:10:20.679540 kernel: Key type dns_resolver registered
Sep 13 00:10:20.679547 kernel: registered taskstats version 1
Sep 13 00:10:20.679555 kernel: Loading compiled-in X.509 certificates
Sep 13 00:10:20.679562 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 00:10:20.679568 kernel: Key type .fscrypt registered
Sep 13 00:10:20.679575 kernel: Key type fscrypt-provisioning registered
Sep 13 00:10:20.679581 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:10:20.679588 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:10:20.679595 kernel: ima: No architecture policies found
Sep 13 00:10:20.679602 kernel: clk: Disabling unused clocks
Sep 13 00:10:20.679608 kernel: Freeing unused kernel memory: 36416K
Sep 13 00:10:20.679617 kernel: Run /init as init process
Sep 13 00:10:20.679623 kernel: with arguments:
Sep 13 00:10:20.679630 kernel: /init
Sep 13 00:10:20.679636 kernel: with environment:
Sep 13 00:10:20.679643 kernel: HOME=/
Sep 13 00:10:20.679649 kernel: TERM=linux
Sep 13 00:10:20.679656 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:10:20.679665 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:10:20.679675 systemd[1]: Detected virtualization kvm.
Sep 13 00:10:20.679682 systemd[1]: Detected architecture arm64.
Sep 13 00:10:20.679703 systemd[1]: Running in initrd.
Sep 13 00:10:20.679710 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:10:20.679717 systemd[1]: Hostname set to .
Sep 13 00:10:20.679725 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:10:20.679732 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:10:20.679739 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:10:20.679748 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:10:20.679755 systemd[1]: Reached target paths.target.
Sep 13 00:10:20.679762 systemd[1]: Reached target slices.target.
Sep 13 00:10:20.679769 systemd[1]: Reached target swap.target.
Sep 13 00:10:20.679776 systemd[1]: Reached target timers.target.
Sep 13 00:10:20.679783 systemd[1]: Listening on iscsid.socket.
Sep 13 00:10:20.679790 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:10:20.679799 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:10:20.679806 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:10:20.679813 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:10:20.679820 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:10:20.679828 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:10:20.679835 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:10:20.679842 systemd[1]: Reached target sockets.target.
Sep 13 00:10:20.679849 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:10:20.679866 systemd[1]: Finished network-cleanup.service.
Sep 13 00:10:20.679876 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:10:20.679883 systemd[1]: Starting systemd-journald.service...
Sep 13 00:10:20.679890 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:10:20.679897 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:10:20.679906 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:10:20.679913 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:10:20.679921 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:10:20.679928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:10:20.679939 systemd-journald[291]: Journal started
Sep 13 00:10:20.679983 systemd-journald[291]: Runtime Journal (/run/log/journal/9cb0785f56e140f693171db6ff116f92) is 6.0M, max 48.7M, 42.6M free.
Sep 13 00:10:20.676384 systemd-modules-load[292]: Inserted module 'overlay'
Sep 13 00:10:20.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.683930 systemd[1]: Started systemd-journald.service.
Sep 13 00:10:20.683955 kernel: audit: type=1130 audit(1757722220.681:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.684264 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:10:20.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.687693 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:10:20.690845 kernel: audit: type=1130 audit(1757722220.684:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.690867 kernel: audit: type=1130 audit(1757722220.688:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.691613 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:10:20.696304 systemd-resolved[293]: Positive Trust Anchors:
Sep 13 00:10:20.696318 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:10:20.696344 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:10:20.700502 systemd-resolved[293]: Defaulting to hostname 'linux'.
Sep 13 00:10:20.708694 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:10:20.708714 kernel: audit: type=1130 audit(1757722220.705:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.708724 kernel: Bridge firewalling registered
Sep 13 00:10:20.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.701453 systemd[1]: Started systemd-resolved.service.
Sep 13 00:10:20.708353 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:10:20.709168 systemd-modules-load[292]: Inserted module 'br_netfilter'
Sep 13 00:10:20.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.710971 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:10:20.715152 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:10:20.715774 kernel: audit: type=1130 audit(1757722220.711:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.720242 kernel: SCSI subsystem initialized
Sep 13 00:10:20.723986 dracut-cmdline[307]: dracut-dracut-053
Sep 13 00:10:20.726179 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:10:20.732135 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:10:20.732157 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:10:20.732167 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:10:20.731914 systemd-modules-load[292]: Inserted module 'dm_multipath'
Sep 13 00:10:20.732706 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:10:20.734921 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:10:20.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.739622 kernel: audit: type=1130 audit(1757722220.734:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.743299 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:10:20.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.747215 kernel: audit: type=1130 audit(1757722220.743:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.791219 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:10:20.802214 kernel: iscsi: registered transport (tcp)
Sep 13 00:10:20.817211 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:10:20.817227 kernel: QLogic iSCSI HBA Driver
Sep 13 00:10:20.850957 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:10:20.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.852477 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:10:20.855028 kernel: audit: type=1130 audit(1757722220.851:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:20.894214 kernel: raid6: neonx8 gen() 13741 MB/s
Sep 13 00:10:20.911207 kernel: raid6: neonx8 xor() 10790 MB/s
Sep 13 00:10:20.928207 kernel: raid6: neonx4 gen() 13514 MB/s
Sep 13 00:10:20.945207 kernel: raid6: neonx4 xor() 11039 MB/s
Sep 13 00:10:20.962207 kernel: raid6: neonx2 gen() 12969 MB/s
Sep 13 00:10:20.979205 kernel: raid6: neonx2 xor() 10252 MB/s
Sep 13 00:10:20.996206 kernel: raid6: neonx1 gen() 10568 MB/s
Sep 13 00:10:21.013222 kernel: raid6: neonx1 xor() 8770 MB/s
Sep 13 00:10:21.030223 kernel: raid6: int64x8 gen() 6269 MB/s
Sep 13 00:10:21.047218 kernel: raid6: int64x8 xor() 3543 MB/s
Sep 13 00:10:21.064210 kernel: raid6: int64x4 gen() 7207 MB/s
Sep 13 00:10:21.081211 kernel: raid6: int64x4 xor() 3856 MB/s
Sep 13 00:10:21.098217 kernel: raid6: int64x2 gen() 6150 MB/s
Sep 13 00:10:21.115219 kernel: raid6: int64x2 xor() 3320 MB/s
Sep 13 00:10:21.132216 kernel: raid6: int64x1 gen() 5041 MB/s
Sep 13 00:10:21.149459 kernel: raid6: int64x1 xor() 2646 MB/s
Sep 13 00:10:21.149483 kernel: raid6: using algorithm neonx8 gen() 13741 MB/s
Sep 13 00:10:21.149500 kernel: raid6: .... xor() 10790 MB/s, rmw enabled
Sep 13 00:10:21.149517 kernel: raid6: using neon recovery algorithm
Sep 13 00:10:21.160211 kernel: xor: measuring software checksum speed
Sep 13 00:10:21.160227 kernel: 8regs : 16283 MB/sec
Sep 13 00:10:21.161701 kernel: 32regs : 19249 MB/sec
Sep 13 00:10:21.161719 kernel: arm64_neon : 27860 MB/sec
Sep 13 00:10:21.161728 kernel: xor: using function: arm64_neon (27860 MB/sec)
Sep 13 00:10:21.213220 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 00:10:21.223526 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:10:21.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:21.226000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:10:21.226000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:10:21.227041 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:10:21.228155 kernel: audit: type=1130 audit(1757722221.223:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:21.238811 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Sep 13 00:10:21.242065 systemd[1]: Started systemd-udevd.service.
Sep 13 00:10:21.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:21.244089 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:10:21.254170 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Sep 13 00:10:21.279441 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:10:21.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:21.280779 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:10:21.314167 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:10:21.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:21.339617 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:10:21.344952 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:10:21.344967 kernel: GPT:9289727 != 19775487
Sep 13 00:10:21.344976 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:10:21.344990 kernel: GPT:9289727 != 19775487 Sep 13 00:10:21.344998 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:10:21.345006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:21.357215 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (554) Sep 13 00:10:21.362632 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:10:21.363500 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:10:21.367651 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:10:21.373381 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:10:21.376443 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:10:21.377867 systemd[1]: Starting disk-uuid.service... Sep 13 00:10:21.383427 disk-uuid[561]: Primary Header is updated. Sep 13 00:10:21.383427 disk-uuid[561]: Secondary Entries is updated. Sep 13 00:10:21.383427 disk-uuid[561]: Secondary Header is updated. Sep 13 00:10:21.386215 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:22.391216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:22.391484 disk-uuid[562]: The operation has completed successfully. Sep 13 00:10:22.413914 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:10:22.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.414010 systemd[1]: Finished disk-uuid.service. Sep 13 00:10:22.417831 systemd[1]: Starting verity-setup.service... 
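The `GPT:9289727 != 19775487` lines above mean the backup GPT header was found well before the disk's last sector, which is expected right after a small disk image is written onto a larger disk. A quick sanity check of the numbers, taking the sector count from the virtio_blk line above:

```shell
# The backup GPT header belongs on the disk's last LBA.
# vda reports 19775488 512-byte logical blocks, so:
disk_sectors=19775488
last_lba=$((disk_sectors - 1))
echo "expected backup header at LBA $last_lba, found at LBA 9289727"
```

On a live system the usual fix is to relocate the backup GPT structures to the end of the disk, e.g. with `sgdisk -e /dev/vda` or with GNU Parted as the kernel message suggests (device path assumed from the log above).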
Sep 13 00:10:22.429212 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:10:22.446965 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:10:22.448941 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:10:22.450652 systemd[1]: Finished verity-setup.service. Sep 13 00:10:22.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.493789 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:10:22.494823 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:10:22.494500 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:10:22.495253 systemd[1]: Starting ignition-setup.service... Sep 13 00:10:22.496969 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:10:22.503418 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:10:22.503454 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:10:22.503468 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:10:22.511643 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:10:22.516921 systemd[1]: Finished ignition-setup.service. Sep 13 00:10:22.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.518391 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 13 00:10:22.561383 ignition[647]: Ignition 2.14.0 Sep 13 00:10:22.562097 ignition[647]: Stage: fetch-offline Sep 13 00:10:22.562718 ignition[647]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:22.563403 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:22.564316 ignition[647]: parsed url from cmdline: "" Sep 13 00:10:22.564372 ignition[647]: no config URL provided Sep 13 00:10:22.564898 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:10:22.565712 ignition[647]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:10:22.566391 ignition[647]: op(1): [started] loading QEMU firmware config module Sep 13 00:10:22.567110 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:10:22.570211 ignition[647]: op(1): [finished] loading QEMU firmware config module Sep 13 00:10:22.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.575000 audit: BPF prog-id=9 op=LOAD Sep 13 00:10:22.574014 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:10:22.576084 systemd[1]: Starting systemd-networkd.service... Sep 13 00:10:22.594470 systemd-networkd[739]: lo: Link UP Sep 13 00:10:22.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.594484 systemd-networkd[739]: lo: Gained carrier Sep 13 00:10:22.594893 systemd-networkd[739]: Enumeration completed Sep 13 00:10:22.595078 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:10:22.595374 systemd[1]: Started systemd-networkd.service. 
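The `qemu_fw_cfg` module loaded above is how Ignition receives its config on QEMU: the host passes the config file through the firmware config interface. A sketch of the corresponding QEMU argument; the config path is hypothetical, and the fw_cfg key shown is the one Flatcar's qemu provider is documented to read:

```shell
# Build the -fw_cfg argument that Ignition's qemu provider looks for.
config=/path/to/config.ign   # hypothetical path to an Ignition config
fw_cfg_arg="name=opt/org.flatcar-linux/config,file=${config}"
echo "qemu-system-aarch64 ... -fw_cfg ${fw_cfg_arg}"
```

In this boot no such key was supplied, which matches the `no config URL provided` / `no config at "/usr/lib/ignition/user.ign"` entries above.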
Sep 13 00:10:22.595994 systemd-networkd[739]: eth0: Link UP Sep 13 00:10:22.595998 systemd-networkd[739]: eth0: Gained carrier Sep 13 00:10:22.596755 systemd[1]: Reached target network.target. Sep 13 00:10:22.598132 systemd[1]: Starting iscsiuio.service... Sep 13 00:10:22.605068 systemd[1]: Started iscsiuio.service. Sep 13 00:10:22.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.606909 systemd[1]: Starting iscsid.service... Sep 13 00:10:22.609772 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:10:22.609772 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:10:22.609772 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:10:22.609772 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:10:22.609772 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:10:22.609772 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:10:22.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.612644 systemd[1]: Started iscsid.service. Sep 13 00:10:22.617052 systemd[1]: Starting dracut-initqueue.service...
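The iscsid warnings above are harmless on this machine (no software-iSCSI targets are in use), but on hosts that do run software iSCSI the missing file can be created as the message describes. A minimal sketch, writing into a temporary directory for illustration; on a real host the path is `/etc/iscsi/initiatorname.iscsi` and writing it needs root:

```shell
# Create an InitiatorName file in the format iscsid expects:
#   InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]
conf_dir="$(mktemp -d)"          # stand-in for /etc/iscsi
printf 'InitiatorName=iqn.2001-04.com.example:node1\n' \
    > "$conf_dir/initiatorname.iscsi"
cat "$conf_dir/initiatorname.iscsi"
```

After creating the real file, restarting iscsid.service makes it pick up the new initiator name.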
Sep 13 00:10:22.617096 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:10:22.627129 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:10:22.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.628022 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:10:22.629157 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:10:22.630500 systemd[1]: Reached target remote-fs.target. Sep 13 00:10:22.632480 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:10:22.633642 ignition[647]: parsing config with SHA512: c3b8301ae4489ea2d319c00d8293c697850adf59e9548df94f681d5724cf67b6eb7847475e83c144dc6ea9f44372a3c7ed468370fc8e8853a820d83e0ab781e0 Sep 13 00:10:22.640150 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:10:22.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.642876 unknown[647]: fetched base config from "system" Sep 13 00:10:22.642889 unknown[647]: fetched user config from "qemu" Sep 13 00:10:22.643521 ignition[647]: fetch-offline: fetch-offline passed Sep 13 00:10:22.643579 ignition[647]: Ignition finished successfully Sep 13 00:10:22.645433 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:10:22.646180 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:10:22.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.646904 systemd[1]: Starting ignition-kargs.service... 
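Ignition logs the SHA512 of the config it parsed (the long hex digest in the `parsing config with SHA512:` entry above). The same kind of digest can be reproduced offline with coreutils to check which config a machine actually booted with; the input bytes below are a placeholder, not the real config:

```shell
# A SHA512 digest, as printed by Ignition, is 128 hex characters.
digest="$(printf 'example config bytes' | sha512sum | awk '{print $1}')"
echo "digest length: ${#digest}"
```

Running `sha512sum` over the config file handed to the hypervisor can then be compared against the value Ignition logged.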
Sep 13 00:10:22.655597 ignition[760]: Ignition 2.14.0 Sep 13 00:10:22.655606 ignition[760]: Stage: kargs Sep 13 00:10:22.655700 ignition[760]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:22.655710 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:22.656578 ignition[760]: kargs: kargs passed Sep 13 00:10:22.658266 systemd[1]: Finished ignition-kargs.service. Sep 13 00:10:22.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.656623 ignition[760]: Ignition finished successfully Sep 13 00:10:22.659856 systemd[1]: Starting ignition-disks.service... Sep 13 00:10:22.666716 ignition[766]: Ignition 2.14.0 Sep 13 00:10:22.666727 ignition[766]: Stage: disks Sep 13 00:10:22.666827 ignition[766]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:22.666846 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:22.669284 systemd[1]: Finished ignition-disks.service. Sep 13 00:10:22.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.667804 ignition[766]: disks: disks passed Sep 13 00:10:22.670659 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:10:22.667865 ignition[766]: Ignition finished successfully Sep 13 00:10:22.671648 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:10:22.672579 systemd[1]: Reached target local-fs.target. Sep 13 00:10:22.673636 systemd[1]: Reached target sysinit.target. Sep 13 00:10:22.674724 systemd[1]: Reached target basic.target. Sep 13 00:10:22.676618 systemd[1]: Starting systemd-fsck-root.service... 
Sep 13 00:10:22.686926 systemd-fsck[774]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 13 00:10:22.689696 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:10:22.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.691138 systemd[1]: Mounting sysroot.mount... Sep 13 00:10:22.696149 systemd[1]: Mounted sysroot.mount. Sep 13 00:10:22.697147 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:10:22.696793 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:10:22.698717 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:10:22.699480 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:10:22.699517 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:10:22.699539 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:10:22.701437 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:10:22.703622 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:10:22.707910 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:10:22.712476 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:10:22.716071 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:10:22.719994 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:10:22.745970 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:10:22.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:10:22.747453 systemd[1]: Starting ignition-mount.service... Sep 13 00:10:22.748637 systemd[1]: Starting sysroot-boot.service... Sep 13 00:10:22.753892 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:10:22.762287 ignition[827]: INFO : Ignition 2.14.0 Sep 13 00:10:22.762287 ignition[827]: INFO : Stage: mount Sep 13 00:10:22.764771 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:22.764771 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:22.764771 ignition[827]: INFO : mount: mount passed Sep 13 00:10:22.764771 ignition[827]: INFO : Ignition finished successfully Sep 13 00:10:22.766737 systemd[1]: Finished ignition-mount.service. Sep 13 00:10:22.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:22.767795 systemd[1]: Finished sysroot-boot.service. Sep 13 00:10:23.456875 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:10:23.462220 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (836) Sep 13 00:10:23.463430 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:10:23.463448 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:10:23.463458 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:10:23.466762 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:10:23.468134 systemd[1]: Starting ignition-files.service... 
Sep 13 00:10:23.481539 ignition[856]: INFO : Ignition 2.14.0 Sep 13 00:10:23.481539 ignition[856]: INFO : Stage: files Sep 13 00:10:23.482753 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:23.482753 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:23.482753 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:10:23.485695 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:10:23.485695 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:10:23.485695 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:10:23.485695 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:10:23.490011 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:10:23.490011 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:10:23.490011 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:10:23.490011 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:10:23.490011 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 00:10:23.485900 unknown[856]: wrote ssh authorized keys file for user: core Sep 13 00:10:23.528590 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:10:23.964801 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 
00:10:23.966453 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:10:23.967978 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 13 00:10:24.191777 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 13 00:10:24.339295 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:10:24.339295 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 
00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:10:24.344540 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 00:10:24.467431 systemd-networkd[739]: eth0: Gained IPv6LL Sep 13 00:10:24.641517 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 13 00:10:25.131536 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:10:25.131536 ignition[856]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(f): 
[started] processing unit "prepare-helm.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:10:25.134503 ignition[856]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:10:25.161427 ignition[856]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:10:25.162555 ignition[856]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:10:25.162555 ignition[856]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" 
Sep 13 00:10:25.162555 ignition[856]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:10:25.162555 ignition[856]: INFO : files: files passed Sep 13 00:10:25.162555 ignition[856]: INFO : Ignition finished successfully Sep 13 00:10:25.172280 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:10:25.172301 kernel: audit: type=1130 audit(1757722225.164:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.163096 systemd[1]: Finished ignition-files.service. Sep 13 00:10:25.177169 kernel: audit: type=1130 audit(1757722225.172:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.177190 kernel: audit: type=1131 audit(1757722225.172:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.165413 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
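The `10-use-cgroupfs.conf` file written during the files stage above is a systemd drop-in: an extra fragment under a `<unit>.service.d/` directory that systemd merges into the unit without replacing it. A sketch of the mechanism, using a temporary directory and illustrative content (the real drop-in's contents are not shown in this log):

```shell
# Drop-ins live at .../systemd/system/<unit>.service.d/*.conf and are
# merged into the unit definition in lexical order of their file names.
unit_dir="$(mktemp -d)/containerd.service.d"   # stand-in for /etc/systemd/system/containerd.service.d
mkdir -p "$unit_dir"
printf '[Service]\nEnvironment=EXAMPLE=1\n' > "$unit_dir/10-use-cgroupfs.conf"
cat "$unit_dir/10-use-cgroupfs.conf"
```

On a running system, `systemctl cat containerd.service` shows the unit together with any drop-ins applied to it.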
Sep 13 00:10:25.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.166604 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:10:25.182501 kernel: audit: type=1130 audit(1757722225.177:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.182519 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:10:25.168438 systemd[1]: Starting ignition-quench.service... Sep 13 00:10:25.184702 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:10:25.172113 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:10:25.172188 systemd[1]: Finished ignition-quench.service. Sep 13 00:10:25.176658 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:10:25.177970 systemd[1]: Reached target ignition-complete.target. Sep 13 00:10:25.181853 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:10:25.194147 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:10:25.194238 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:10:25.199596 kernel: audit: type=1130 audit(1757722225.195:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:10:25.199613 kernel: audit: type=1131 audit(1757722225.195:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.195538 systemd[1]: Reached target initrd-fs.target. Sep 13 00:10:25.200132 systemd[1]: Reached target initrd.target. Sep 13 00:10:25.201187 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:10:25.201865 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:10:25.211665 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:10:25.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.212947 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:10:25.215556 kernel: audit: type=1130 audit(1757722225.212:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.220361 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:10:25.221031 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:10:25.222092 systemd[1]: Stopped target timers.target. Sep 13 00:10:25.223132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 13 00:10:25.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.223245 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:10:25.227676 kernel: audit: type=1131 audit(1757722225.223:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.224203 systemd[1]: Stopped target initrd.target. Sep 13 00:10:25.227247 systemd[1]: Stopped target basic.target. Sep 13 00:10:25.228190 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:10:25.229211 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:10:25.230227 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:10:25.231357 systemd[1]: Stopped target remote-fs.target. Sep 13 00:10:25.232340 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:10:25.233386 systemd[1]: Stopped target sysinit.target. Sep 13 00:10:25.234316 systemd[1]: Stopped target local-fs.target. Sep 13 00:10:25.235290 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:10:25.236260 systemd[1]: Stopped target swap.target. Sep 13 00:10:25.241213 kernel: audit: type=1131 audit(1757722225.238:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:25.237515 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:10:25.237620 systemd[1]: Stopped dracut-pre-mount.service. 
Sep 13 00:10:25.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.238797 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:10:25.246562 kernel: audit: type=1131 audit(1757722225.242:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.241823 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:10:25.241920 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:10:25.243001 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:10:25.243089 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:10:25.246199 systemd[1]: Stopped target paths.target.
Sep 13 00:10:25.247054 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:10:25.252221 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:10:25.252946 systemd[1]: Stopped target slices.target.
Sep 13 00:10:25.253946 systemd[1]: Stopped target sockets.target.
Sep 13 00:10:25.254849 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:10:25.254918 systemd[1]: Closed iscsid.socket.
Sep 13 00:10:25.255706 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:10:25.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.255763 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:10:25.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.256689 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:10:25.256778 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:10:25.257915 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:10:25.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.257996 systemd[1]: Stopped ignition-files.service.
Sep 13 00:10:25.259845 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:10:25.260607 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:10:25.260715 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:10:25.262541 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:10:25.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.263490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:10:25.263588 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:10:25.264688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:10:25.264769 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:10:25.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.270400 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:10:25.272399 ignition[896]: INFO : Ignition 2.14.0
Sep 13 00:10:25.272399 ignition[896]: INFO : Stage: umount
Sep 13 00:10:25.272399 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:25.272399 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:10:25.272399 ignition[896]: INFO : umount: umount passed
Sep 13 00:10:25.272399 ignition[896]: INFO : Ignition finished successfully
Sep 13 00:10:25.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.270492 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:10:25.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.272127 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:10:25.273693 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:10:25.273767 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:10:25.274661 systemd[1]: Stopped target network.target.
Sep 13 00:10:25.275871 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:10:25.275917 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:10:25.277219 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:10:25.277253 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:10:25.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.278355 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:10:25.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.278389 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:10:25.280135 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:10:25.281111 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:10:25.286276 systemd-networkd[739]: eth0: DHCPv6 lease lost
Sep 13 00:10:25.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.287480 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:10:25.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.296000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:10:25.296000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:10:25.287571 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:10:25.288587 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:10:25.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.288669 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:10:25.290044 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:10:25.290071 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:10:25.292423 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:10:25.293423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:10:25.293476 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:10:25.294677 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:10:25.294714 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:10:25.296901 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:10:25.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.296943 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:10:25.297941 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:10:25.303389 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:10:25.306326 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:10:25.306410 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:10:25.310350 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:10:25.311625 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:10:25.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.313548 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:10:25.313580 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:10:25.314352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:10:25.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.314379 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:10:25.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.316476 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:10:25.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.316521 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:10:25.317776 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:10:25.317821 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:10:25.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.319384 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:10:25.319424 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:10:25.321138 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:10:25.322168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:10:25.322221 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:10:25.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.326470 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:10:25.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.326566 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:10:25.327684 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:10:25.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:25.327760 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:10:25.328911 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:10:25.329959 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:10:25.329999 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:10:25.331639 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:10:25.338109 systemd[1]: Switching root.
Sep 13 00:10:25.339000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:10:25.339000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:10:25.340000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:10:25.340000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:10:25.340000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:10:25.358007 iscsid[744]: iscsid shutting down.
Sep 13 00:10:25.358580 systemd-journald[291]: Received SIGTERM from PID 1 (n/a).
Sep 13 00:10:25.358614 systemd-journald[291]: Journal stopped
Sep 13 00:10:27.445427 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:10:27.445483 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:10:27.445494 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:10:27.445504 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:10:27.445514 kernel: SELinux: policy capability open_perms=1
Sep 13 00:10:27.445524 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:10:27.445537 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:10:27.445550 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:10:27.445560 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:10:27.445569 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:10:27.445579 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:10:27.445590 systemd[1]: Successfully loaded SELinux policy in 32.616ms.
Sep 13 00:10:27.445609 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.598ms.
Sep 13 00:10:27.445621 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:10:27.445632 systemd[1]: Detected virtualization kvm.
Sep 13 00:10:27.445641 systemd[1]: Detected architecture arm64.
Sep 13 00:10:27.445654 systemd[1]: Detected first boot.
Sep 13 00:10:27.445664 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:10:27.445676 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:10:27.445686 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:10:27.445697 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:10:27.445708 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:10:27.445720 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:10:27.445731 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:10:27.445741 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 13 00:10:27.445751 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:10:27.445765 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:10:27.445777 systemd[1]: Created slice system-getty.slice.
Sep 13 00:10:27.445787 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:10:27.445806 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:10:27.445817 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:10:27.445828 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:10:27.445839 systemd[1]: Created slice user.slice.
Sep 13 00:10:27.445850 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:10:27.445860 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:10:27.445871 systemd[1]: Set up automount boot.automount.
Sep 13 00:10:27.445883 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:10:27.445893 systemd[1]: Reached target integritysetup.target.
Sep 13 00:10:27.445904 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:10:27.445913 systemd[1]: Reached target remote-fs.target.
Sep 13 00:10:27.445924 systemd[1]: Reached target slices.target.
Sep 13 00:10:27.445934 systemd[1]: Reached target swap.target.
Sep 13 00:10:27.445945 systemd[1]: Reached target torcx.target.
Sep 13 00:10:27.445956 systemd[1]: Reached target veritysetup.target.
Sep 13 00:10:27.445968 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:10:27.445978 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:10:27.445988 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:10:27.445998 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:10:27.446009 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:10:27.446019 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:10:27.446029 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:10:27.446039 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:10:27.446050 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:10:27.446061 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:10:27.446072 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:10:27.446082 systemd[1]: Mounting media.mount...
Sep 13 00:10:27.446091 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:10:27.446101 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:10:27.446111 systemd[1]: Mounting tmp.mount...
Sep 13 00:10:27.446122 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:10:27.446132 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:10:27.446142 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:10:27.446152 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:10:27.446163 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:10:27.446174 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:10:27.446184 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:10:27.446200 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:10:27.446215 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:10:27.446225 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:10:27.446236 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:10:27.446246 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:10:27.446257 systemd[1]: Starting systemd-journald.service...
Sep 13 00:10:27.446268 kernel: fuse: init (API version 7.34)
Sep 13 00:10:27.446280 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:10:27.446292 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:10:27.446306 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:10:27.446316 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:10:27.446326 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:10:27.446336 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:10:27.446346 kernel: loop: module loaded
Sep 13 00:10:27.446356 systemd[1]: Mounted media.mount.
Sep 13 00:10:27.446366 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:10:27.446376 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:10:27.446387 systemd[1]: Mounted tmp.mount.
Sep 13 00:10:27.446397 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:10:27.446408 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:10:27.446418 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:10:27.446431 systemd-journald[1029]: Journal started
Sep 13 00:10:27.446472 systemd-journald[1029]: Runtime Journal (/run/log/journal/9cb0785f56e140f693171db6ff116f92) is 6.0M, max 48.7M, 42.6M free.
Sep 13 00:10:27.443000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:10:27.443000 audit[1029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffb218630 a2=4000 a3=1 items=0 ppid=1 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:10:27.443000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:10:27.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.448270 systemd[1]: Started systemd-journald.service.
Sep 13 00:10:27.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.449207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:10:27.449433 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:10:27.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.450284 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:10:27.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.451416 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:10:27.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.452393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:10:27.452598 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:10:27.453613 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:10:27.453871 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:10:27.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.454932 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:10:27.455285 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:10:27.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.456660 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:10:27.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.457933 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:10:27.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.459185 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:10:27.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.460353 systemd[1]: Reached target network-pre.target.
Sep 13 00:10:27.462238 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:10:27.463990 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:10:27.464718 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:10:27.466386 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:10:27.468767 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:10:27.471353 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:10:27.472592 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:10:27.473508 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:10:27.474772 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:10:27.478576 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:10:27.479460 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:10:27.487694 systemd-journald[1029]: Time spent on flushing to /var/log/journal/9cb0785f56e140f693171db6ff116f92 is 14.393ms for 925 entries.
Sep 13 00:10:27.487694 systemd-journald[1029]: System Journal (/var/log/journal/9cb0785f56e140f693171db6ff116f92) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:10:27.516035 systemd-journald[1029]: Received client request to flush runtime journal.
Sep 13 00:10:27.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.487134 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:10:27.489527 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:10:27.496377 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:10:27.498203 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:10:27.503022 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:10:27.508095 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:10:27.509918 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:10:27.517475 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:10:27.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.521263 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:10:27.527757 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:10:27.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.529666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:10:27.548571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:10:27.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.861577 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:10:27.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.863492 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:10:27.879056 systemd-udevd[1090]: Using default interface naming scheme 'v252'.
Sep 13 00:10:27.890003 systemd[1]: Started systemd-udevd.service.
Sep 13 00:10:27.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.892092 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:10:27.903692 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:10:27.914012 systemd[1]: Found device dev-ttyAMA0.device.
Sep 13 00:10:27.932887 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:10:27.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.968761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:10:27.985824 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 00:10:27.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:27.987751 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 00:10:27.998632 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:10:28.007697 systemd-networkd[1097]: lo: Link UP
Sep 13 00:10:28.007706 systemd-networkd[1097]: lo: Gained carrier
Sep 13 00:10:28.008070 systemd-networkd[1097]: Enumeration completed
Sep 13 00:10:28.008174 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:10:28.008229 systemd[1]: Started systemd-networkd.service.
Sep 13 00:10:28.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:28.009890 systemd-networkd[1097]: eth0: Link UP
Sep 13 00:10:28.009902 systemd-networkd[1097]: eth0: Gained carrier
Sep 13 00:10:28.021101 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 00:10:28.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:28.022083 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:10:28.024876 systemd[1]: Starting lvm2-activation.service...
Sep 13 00:10:28.030319 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:10:28.030871 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:10:28.061305 systemd[1]: Finished lvm2-activation.service.
Sep 13 00:10:28.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:10:28.062216 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:10:28.063025 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:10:28.063060 systemd[1]: Reached target local-fs.target.
Sep 13 00:10:28.063852 systemd[1]: Reached target machines.target.
Sep 13 00:10:28.065911 systemd[1]: Starting ldconfig.service...
Sep 13 00:10:28.067565 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:10:28.067691 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:10:28.069451 systemd[1]: Starting systemd-boot-update.service...
Sep 13 00:10:28.071446 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 00:10:28.075055 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 00:10:28.077242 systemd[1]: Starting systemd-sysext.service...
Sep 13 00:10:28.078409 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl)
Sep 13 00:10:28.079443 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 00:10:28.086974 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 00:10:28.092694 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 00:10:28.092974 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 00:10:28.100473 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 00:10:28.105235 kernel: loop0: detected capacity change from 0 to 203944 Sep 13 00:10:28.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.151983 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:10:28.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.158218 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:10:28.182095 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31) Sep 13 00:10:28.182095 systemd-fsck[1143]: /dev/vda1: 236 files, 117310/258078 clusters Sep 13 00:10:28.182388 kernel: loop1: detected capacity change from 0 to 203944 Sep 13 00:10:28.184954 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:10:28.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.192413 (sd-sysext)[1147]: Using extensions 'kubernetes'. Sep 13 00:10:28.192897 (sd-sysext)[1147]: Merged extensions into '/usr'. Sep 13 00:10:28.210538 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.211909 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:10:28.213990 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:10:28.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:10:28.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.215742 systemd[1]: Starting modprobe@loop.service... Sep 13 00:10:28.216362 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.216475 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:10:28.217233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:10:28.217383 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:10:28.218430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:10:28.218556 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:10:28.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.219596 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:10:28.219750 systemd[1]: Finished modprobe@loop.service. Sep 13 00:10:28.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:10:28.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.220759 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:10:28.220872 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.304950 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:10:28.309499 systemd[1]: Finished ldconfig.service. Sep 13 00:10:28.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.435734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:10:28.437556 systemd[1]: Mounting boot.mount... Sep 13 00:10:28.439485 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:10:28.444418 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:10:28.446310 systemd[1]: Finished systemd-sysext.service. Sep 13 00:10:28.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.448340 systemd[1]: Starting ensure-sysext.service... Sep 13 00:10:28.450132 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:10:28.454116 systemd[1]: Mounted boot.mount. Sep 13 00:10:28.456184 systemd[1]: Reloading. Sep 13 00:10:28.460915 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Sep 13 00:10:28.462146 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:10:28.463915 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:10:28.496225 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-09-13T00:10:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:10:28.496281 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-09-13T00:10:28Z" level=info msg="torcx already run" Sep 13 00:10:28.554295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:10:28.554315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:10:28.570180 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:28.616367 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:10:28.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.618183 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:10:28.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:10:28.621188 systemd[1]: Starting audit-rules.service... Sep 13 00:10:28.622932 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:10:28.624788 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:10:28.627315 systemd[1]: Starting systemd-resolved.service... Sep 13 00:10:28.629274 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:10:28.631489 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:10:28.632824 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:10:28.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.635000 audit[1245]: SYSTEM_BOOT pid=1245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.635739 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:10:28.638685 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.639877 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:10:28.642012 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:10:28.644876 systemd[1]: Starting modprobe@loop.service... Sep 13 00:10:28.645685 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.645834 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 13 00:10:28.645948 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:10:28.647055 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:10:28.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.648395 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:10:28.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.649562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:10:28.649698 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:10:28.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.650848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:10:28.650990 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:10:28.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:10:28.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.654365 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:10:28.654547 systemd[1]: Finished modprobe@loop.service. Sep 13 00:10:28.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.655521 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.656653 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:10:28.658362 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:10:28.658979 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.659095 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:10:28.660560 systemd[1]: Starting systemd-update-done.service... Sep 13 00:10:28.661321 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:10:28.662961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:10:28.663118 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:10:28.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.665703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:10:28.666553 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:10:28.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.668809 systemd[1]: Finished systemd-update-done.service. Sep 13 00:10:28.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.670094 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:10:28.670189 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.672584 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.673788 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:10:28.676268 systemd[1]: Starting modprobe@drm.service... 
Sep 13 00:10:28.677962 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:10:28.679715 systemd[1]: Starting modprobe@loop.service... Sep 13 00:10:28.680397 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.680528 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:10:28.681805 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:10:28.682616 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:10:28.683734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:10:28.683914 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:10:28.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.685776 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:10:28.685935 systemd[1]: Finished modprobe@drm.service. 
Sep 13 00:10:28.687715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:10:28.687876 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:10:28.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:10:28.692067 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:10:28.692273 systemd[1]: Finished modprobe@loop.service. Sep 13 00:10:28.692000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:10:28.692000 audit[1278]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe2d8c5e0 a2=420 a3=0 items=0 ppid=1233 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:10:28.692000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:10:28.692641 augenrules[1278]: No rules Sep 13 00:10:28.693453 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:10:28.693539 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.694565 systemd[1]: Finished ensure-sysext.service. Sep 13 00:10:28.695923 systemd[1]: Finished audit-rules.service. Sep 13 00:10:28.702299 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:10:28.703104 systemd[1]: Reached target time-set.target. 
Sep 13 00:10:28.704136 systemd-timesyncd[1242]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:10:28.704190 systemd-timesyncd[1242]: Initial clock synchronization to Sat 2025-09-13 00:10:28.677731 UTC. Sep 13 00:10:28.713836 systemd-resolved[1241]: Positive Trust Anchors: Sep 13 00:10:28.713849 systemd-resolved[1241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:10:28.713875 systemd-resolved[1241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:10:28.722426 systemd-resolved[1241]: Defaulting to hostname 'linux'. Sep 13 00:10:28.723857 systemd[1]: Started systemd-resolved.service. Sep 13 00:10:28.724703 systemd[1]: Reached target network.target. Sep 13 00:10:28.725428 systemd[1]: Reached target nss-lookup.target. Sep 13 00:10:28.726140 systemd[1]: Reached target sysinit.target. Sep 13 00:10:28.726959 systemd[1]: Started motdgen.path. Sep 13 00:10:28.727628 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:10:28.728810 systemd[1]: Started logrotate.timer. Sep 13 00:10:28.729545 systemd[1]: Started mdadm.timer. Sep 13 00:10:28.730221 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:10:28.730975 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:10:28.731008 systemd[1]: Reached target paths.target. Sep 13 00:10:28.731736 systemd[1]: Reached target timers.target. Sep 13 00:10:28.732740 systemd[1]: Listening on dbus.socket. 
Sep 13 00:10:28.734555 systemd[1]: Starting docker.socket... Sep 13 00:10:28.737299 systemd[1]: Listening on sshd.socket. Sep 13 00:10:28.737946 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:10:28.738268 systemd[1]: Listening on docker.socket. Sep 13 00:10:28.738908 systemd[1]: Reached target sockets.target. Sep 13 00:10:28.739518 systemd[1]: Reached target basic.target. Sep 13 00:10:28.740180 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:10:28.740238 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.740261 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:10:28.741250 systemd[1]: Starting containerd.service... Sep 13 00:10:28.742890 systemd[1]: Starting dbus.service... Sep 13 00:10:28.744464 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:10:28.746153 systemd[1]: Starting extend-filesystems.service... Sep 13 00:10:28.746952 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:10:28.748078 systemd[1]: Starting motdgen.service... Sep 13 00:10:28.749377 jq[1293]: false Sep 13 00:10:28.751082 systemd[1]: Starting prepare-helm.service... Sep 13 00:10:28.753009 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:10:28.756887 systemd[1]: Starting sshd-keygen.service... Sep 13 00:10:28.759973 systemd[1]: Starting systemd-logind.service... Sep 13 00:10:28.760589 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 13 00:10:28.760665 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:10:28.761796 systemd[1]: Starting update-engine.service... Sep 13 00:10:28.763523 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:10:28.766169 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:10:28.766484 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:10:28.767347 extend-filesystems[1294]: Found loop1 Sep 13 00:10:28.767347 extend-filesystems[1294]: Found vda Sep 13 00:10:28.767347 extend-filesystems[1294]: Found vda1 Sep 13 00:10:28.767347 extend-filesystems[1294]: Found vda2 Sep 13 00:10:28.774813 jq[1313]: true Sep 13 00:10:28.774946 extend-filesystems[1294]: Found vda3 Sep 13 00:10:28.774946 extend-filesystems[1294]: Found usr Sep 13 00:10:28.774946 extend-filesystems[1294]: Found vda4 Sep 13 00:10:28.774946 extend-filesystems[1294]: Found vda6 Sep 13 00:10:28.774946 extend-filesystems[1294]: Found vda7 Sep 13 00:10:28.774946 extend-filesystems[1294]: Found vda9 Sep 13 00:10:28.774946 extend-filesystems[1294]: Checking size of /dev/vda9 Sep 13 00:10:28.767614 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:10:28.767838 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:10:28.785967 tar[1317]: linux-arm64/helm Sep 13 00:10:28.770102 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:10:28.786275 jq[1320]: true Sep 13 00:10:28.773481 systemd[1]: Finished motdgen.service. Sep 13 00:10:28.791032 dbus-daemon[1292]: [system] SELinux support is enabled Sep 13 00:10:28.791238 systemd[1]: Started dbus.service. Sep 13 00:10:28.793700 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 13 00:10:28.793726 systemd[1]: Reached target system-config.target.
Sep 13 00:10:28.794532 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:10:28.794556 systemd[1]: Reached target user-config.target.
Sep 13 00:10:28.824626 bash[1348]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:10:28.825294 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:10:28.825445 extend-filesystems[1294]: Resized partition /dev/vda9
Sep 13 00:10:28.828891 extend-filesystems[1351]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:10:28.835364 systemd-logind[1308]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 13 00:10:28.837236 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:10:28.838285 systemd-logind[1308]: New seat seat0.
Sep 13 00:10:28.843195 systemd[1]: Started systemd-logind.service.
Sep 13 00:10:28.848836 update_engine[1312]: I0913 00:10:28.846407 1312 main.cc:92] Flatcar Update Engine starting
Sep 13 00:10:28.849688 env[1324]: time="2025-09-13T00:10:28.849648480Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:10:28.851305 systemd[1]: Started update-engine.service.
Sep 13 00:10:28.851395 update_engine[1312]: I0913 00:10:28.851362 1312 update_check_scheduler.cc:74] Next update check in 10m14s
Sep 13 00:10:28.853764 systemd[1]: Started locksmithd.service.
Sep 13 00:10:28.863308 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:10:28.870321 env[1324]: time="2025-09-13T00:10:28.870279360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.875925240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877033200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877059360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877324040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877342400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877355640Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877364360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877487640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877761320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:28.878828 env[1324]: time="2025-09-13T00:10:28.877938520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:28.877636 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:10:28.879137 extend-filesystems[1351]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:10:28.879137 extend-filesystems[1351]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:10:28.879137 extend-filesystems[1351]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.877955400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.878009240Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.878024280Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883072680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883098360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883112000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883164920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883180560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883211000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883224320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883734720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883757800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883786280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.889584 env[1324]: time="2025-09-13T00:10:28.883802080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.877894 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:10:28.889915 extend-filesystems[1294]: Resized filesystem in /dev/vda9
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.883815960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.883937000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.884026040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.884573120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.884606560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.884792000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.884953840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.884975880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.884988040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.885000640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.885062360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.885175480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.885201200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891246 env[1324]: time="2025-09-13T00:10:28.885217400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.889731 systemd[1]: Started containerd.service.
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885230960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885351520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885367360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885379120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885390400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885415720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885427680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885443840Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:10:28.891587 env[1324]: time="2025-09-13T00:10:28.885475400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.885654720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.885705440Z" level=info msg="Connect containerd service"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.885732640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886319280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886490800Z" level=info msg="Start subscribing containerd event"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886525240Z" level=info msg="Start recovering state"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886576680Z" level=info msg="Start event monitor"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886593360Z" level=info msg="Start snapshots syncer"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886603080Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886610400Z" level=info msg="Start streaming server"
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.886973560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.887020760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:10:28.891750 env[1324]: time="2025-09-13T00:10:28.889665640Z" level=info msg="containerd successfully booted in 0.040917s"
Sep 13 00:10:28.913285 locksmithd[1355]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:10:29.140374 systemd-networkd[1097]: eth0: Gained IPv6LL
Sep 13 00:10:29.142090 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:10:29.143303 systemd[1]: Reached target network-online.target.
Sep 13 00:10:29.145631 systemd[1]: Starting kubelet.service...
Sep 13 00:10:29.169066 tar[1317]: linux-arm64/LICENSE
Sep 13 00:10:29.169162 tar[1317]: linux-arm64/README.md
Sep 13 00:10:29.173380 systemd[1]: Finished prepare-helm.service.
Sep 13 00:10:29.801798 systemd[1]: Started kubelet.service.
Sep 13 00:10:30.207131 kubelet[1378]: E0913 00:10:30.207087 1378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:10:30.208217 sshd_keygen[1316]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:10:30.209414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:10:30.209553 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:10:30.226413 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:10:30.228525 systemd[1]: Starting issuegen.service...
Sep 13 00:10:30.233083 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:10:30.233419 systemd[1]: Finished issuegen.service.
Sep 13 00:10:30.235373 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:10:30.243557 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:10:30.245932 systemd[1]: Started getty@tty1.service.
Sep 13 00:10:30.247916 systemd[1]: Started serial-getty@ttyAMA0.service.
Sep 13 00:10:30.248972 systemd[1]: Reached target getty.target.
Sep 13 00:10:30.249816 systemd[1]: Reached target multi-user.target.
Sep 13 00:10:30.251848 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:10:30.258042 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:10:30.258311 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:10:30.259309 systemd[1]: Startup finished in 5.388s (kernel) + 4.850s (userspace) = 10.239s.
Sep 13 00:10:33.214142 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:10:33.215348 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:41732.service.
Sep 13 00:10:33.263644 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 41732 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:10:33.267491 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:10:33.280880 systemd[1]: Created slice user-500.slice.
Sep 13 00:10:33.281868 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:10:33.287273 systemd-logind[1308]: New session 1 of user core.
Sep 13 00:10:33.292073 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:10:33.293210 systemd[1]: Starting user@500.service...
Sep 13 00:10:33.298277 (systemd)[1407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:10:33.366139 systemd[1407]: Queued start job for default target default.target.
Sep 13 00:10:33.366415 systemd[1407]: Reached target paths.target.
Sep 13 00:10:33.366431 systemd[1407]: Reached target sockets.target.
Sep 13 00:10:33.366442 systemd[1407]: Reached target timers.target.
Sep 13 00:10:33.366452 systemd[1407]: Reached target basic.target.
Sep 13 00:10:33.366499 systemd[1407]: Reached target default.target.
Sep 13 00:10:33.366523 systemd[1407]: Startup finished in 62ms.
Sep 13 00:10:33.366818 systemd[1]: Started user@500.service.
Sep 13 00:10:33.367844 systemd[1]: Started session-1.scope.
Sep 13 00:10:33.419105 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:41736.service.
Sep 13 00:10:33.456798 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 41736 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:10:33.458003 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:10:33.461684 systemd-logind[1308]: New session 2 of user core.
Sep 13 00:10:33.462481 systemd[1]: Started session-2.scope.
Sep 13 00:10:33.515565 sshd[1418]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:33.518318 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:41742.service.
Sep 13 00:10:33.518787 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:41736.service: Deactivated successfully.
Sep 13 00:10:33.519806 systemd-logind[1308]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:10:33.519874 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:10:33.520762 systemd-logind[1308]: Removed session 2.
Sep 13 00:10:33.554876 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 41742 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:10:33.556387 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:10:33.559670 systemd-logind[1308]: New session 3 of user core.
Sep 13 00:10:33.560538 systemd[1]: Started session-3.scope.
Sep 13 00:10:33.609685 sshd[1424]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:33.612591 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:41754.service.
Sep 13 00:10:33.613060 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:41742.service: Deactivated successfully.
Sep 13 00:10:33.614056 systemd-logind[1308]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:10:33.614104 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:10:33.615080 systemd-logind[1308]: Removed session 3.
Sep 13 00:10:33.649108 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 41754 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:10:33.650317 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:10:33.653666 systemd-logind[1308]: New session 4 of user core.
Sep 13 00:10:33.654519 systemd[1]: Started session-4.scope.
Sep 13 00:10:33.708005 sshd[1431]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:33.710251 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:41770.service.
Sep 13 00:10:33.710722 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:41754.service: Deactivated successfully.
Sep 13 00:10:33.711719 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:10:33.711893 systemd-logind[1308]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:10:33.715538 systemd-logind[1308]: Removed session 4.
Sep 13 00:10:33.745334 sshd[1437]: Accepted publickey for core from 10.0.0.1 port 41770 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:10:33.746906 sshd[1437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:10:33.750281 systemd-logind[1308]: New session 5 of user core.
Sep 13 00:10:33.751126 systemd[1]: Started session-5.scope.
Sep 13 00:10:33.807580 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:10:33.807820 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:10:33.845849 systemd[1]: Starting docker.service...
Sep 13 00:10:33.904138 env[1455]: time="2025-09-13T00:10:33.904078104Z" level=info msg="Starting up"
Sep 13 00:10:33.907604 env[1455]: time="2025-09-13T00:10:33.907569180Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:10:33.907604 env[1455]: time="2025-09-13T00:10:33.907601739Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:10:33.907673 env[1455]: time="2025-09-13T00:10:33.907623431Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:10:33.907673 env[1455]: time="2025-09-13T00:10:33.907633978Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:10:33.909826 env[1455]: time="2025-09-13T00:10:33.909800939Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:10:33.909920 env[1455]: time="2025-09-13T00:10:33.909906765Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:10:33.909988 env[1455]: time="2025-09-13T00:10:33.909966209Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:10:33.910050 env[1455]: time="2025-09-13T00:10:33.910035481Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:10:33.915284 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport279310237-merged.mount: Deactivated successfully.
Sep 13 00:10:34.104141 env[1455]: time="2025-09-13T00:10:34.104046103Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 00:10:34.104141 env[1455]: time="2025-09-13T00:10:34.104073190Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 00:10:34.104340 env[1455]: time="2025-09-13T00:10:34.104222412Z" level=info msg="Loading containers: start."
Sep 13 00:10:34.231222 kernel: Initializing XFRM netlink socket
Sep 13 00:10:34.254390 env[1455]: time="2025-09-13T00:10:34.254354615Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:10:34.308972 systemd-networkd[1097]: docker0: Link UP
Sep 13 00:10:34.328632 env[1455]: time="2025-09-13T00:10:34.328587311Z" level=info msg="Loading containers: done."
Sep 13 00:10:34.345755 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck900905212-merged.mount: Deactivated successfully.
Sep 13 00:10:34.349278 env[1455]: time="2025-09-13T00:10:34.349241544Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:10:34.349576 env[1455]: time="2025-09-13T00:10:34.349546819Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:10:34.349746 env[1455]: time="2025-09-13T00:10:34.349726685Z" level=info msg="Daemon has completed initialization"
Sep 13 00:10:34.362991 systemd[1]: Started docker.service.
Sep 13 00:10:34.370928 env[1455]: time="2025-09-13T00:10:34.370883638Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:10:34.916511 env[1324]: time="2025-09-13T00:10:34.916470014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:10:35.487118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741979860.mount: Deactivated successfully.
Sep 13 00:10:36.664716 env[1324]: time="2025-09-13T00:10:36.664669120Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:36.665937 env[1324]: time="2025-09-13T00:10:36.665909219Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:36.668584 env[1324]: time="2025-09-13T00:10:36.668553327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:36.669773 env[1324]: time="2025-09-13T00:10:36.669738724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:36.670673 env[1324]: time="2025-09-13T00:10:36.670633305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 00:10:36.672519 env[1324]: time="2025-09-13T00:10:36.672492715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:10:38.057499 env[1324]: time="2025-09-13T00:10:38.057440231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:38.058873 env[1324]: time="2025-09-13T00:10:38.058822038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:38.060659 env[1324]: time="2025-09-13T00:10:38.060624457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:38.063348 env[1324]: time="2025-09-13T00:10:38.063315217Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:38.063680 env[1324]: time="2025-09-13T00:10:38.063656023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 00:10:38.064160 env[1324]: time="2025-09-13T00:10:38.064137659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:10:39.301439 env[1324]: time="2025-09-13T00:10:39.301392350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:39.303239 env[1324]: time="2025-09-13T00:10:39.303211739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:39.304991 env[1324]: time="2025-09-13T00:10:39.304965504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:39.306656 env[1324]: time="2025-09-13T00:10:39.306626029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:39.308293 env[1324]: time="2025-09-13T00:10:39.308262455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 00:10:39.308790 env[1324]: time="2025-09-13T00:10:39.308763103Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:10:40.284189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:10:40.284373 systemd[1]: Stopped kubelet.service.
Sep 13 00:10:40.285877 systemd[1]: Starting kubelet.service...
Sep 13 00:10:40.376450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1049644759.mount: Deactivated successfully.
Sep 13 00:10:40.393848 systemd[1]: Started kubelet.service.
Sep 13 00:10:40.501728 kubelet[1592]: E0913 00:10:40.501689 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:10:40.504163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:10:40.504313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:10:40.932304 env[1324]: time="2025-09-13T00:10:40.932216847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:40.934183 env[1324]: time="2025-09-13T00:10:40.934144366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:40.937104 env[1324]: time="2025-09-13T00:10:40.937060045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:40.938937 env[1324]: time="2025-09-13T00:10:40.938904191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:40.939236 env[1324]: time="2025-09-13T00:10:40.939175971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 00:10:40.939894 env[1324]: time="2025-09-13T00:10:40.939707420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:10:41.548978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416909130.mount: Deactivated successfully.
Sep 13 00:10:42.518059 env[1324]: time="2025-09-13T00:10:42.518010883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.519391 env[1324]: time="2025-09-13T00:10:42.519344934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.521212 env[1324]: time="2025-09-13T00:10:42.521171793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.523225 env[1324]: time="2025-09-13T00:10:42.523200350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.524140 env[1324]: time="2025-09-13T00:10:42.524108663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 00:10:42.525166 env[1324]: time="2025-09-13T00:10:42.525140529Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:10:42.950316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779473961.mount: Deactivated successfully.
Sep 13 00:10:42.954128 env[1324]: time="2025-09-13T00:10:42.954077277Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.955724 env[1324]: time="2025-09-13T00:10:42.955691209Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.957171 env[1324]: time="2025-09-13T00:10:42.957141336Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.959809 env[1324]: time="2025-09-13T00:10:42.959759993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:10:42.960619 env[1324]: time="2025-09-13T00:10:42.960579409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 00:10:42.962164 env[1324]: time="2025-09-13T00:10:42.962137301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:10:43.372367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993443169.mount: Deactivated successfully.
Sep 13 00:10:45.655643 env[1324]: time="2025-09-13T00:10:45.655599623Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:45.659791 env[1324]: time="2025-09-13T00:10:45.659763222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:45.662366 env[1324]: time="2025-09-13T00:10:45.662340470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:45.664799 env[1324]: time="2025-09-13T00:10:45.664763969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:45.665724 env[1324]: time="2025-09-13T00:10:45.665693584Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 13 00:10:50.534228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:10:50.534392 systemd[1]: Stopped kubelet.service. Sep 13 00:10:50.536867 systemd[1]: Starting kubelet.service... Sep 13 00:10:50.646669 systemd[1]: Started kubelet.service. 
Sep 13 00:10:50.690525 kubelet[1631]: E0913 00:10:50.690485 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:50.693994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:50.694128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:51.903021 systemd[1]: Stopped kubelet.service. Sep 13 00:10:51.904990 systemd[1]: Starting kubelet.service... Sep 13 00:10:51.927254 systemd[1]: Reloading. Sep 13 00:10:51.975878 /usr/lib/systemd/system-generators/torcx-generator[1667]: time="2025-09-13T00:10:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:10:51.975912 /usr/lib/systemd/system-generators/torcx-generator[1667]: time="2025-09-13T00:10:51Z" level=info msg="torcx already run" Sep 13 00:10:52.056185 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:10:52.056216 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:10:52.071424 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:52.133076 systemd[1]: Started kubelet.service. Sep 13 00:10:52.135003 systemd[1]: Stopping kubelet.service... 
Sep 13 00:10:52.135509 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:10:52.135737 systemd[1]: Stopped kubelet.service. Sep 13 00:10:52.137212 systemd[1]: Starting kubelet.service... Sep 13 00:10:52.228133 systemd[1]: Started kubelet.service. Sep 13 00:10:52.264670 kubelet[1726]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:52.264670 kubelet[1726]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:10:52.264670 kubelet[1726]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:10:52.264990 kubelet[1726]: I0913 00:10:52.264732 1726 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:10:52.841227 kubelet[1726]: I0913 00:10:52.841179 1726 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:10:52.841364 kubelet[1726]: I0913 00:10:52.841352 1726 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:10:52.841664 kubelet[1726]: I0913 00:10:52.841646 1726 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:10:52.862745 kubelet[1726]: I0913 00:10:52.862640 1726 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:52.864485 kubelet[1726]: E0913 00:10:52.864441 1726 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:52.869384 kubelet[1726]: E0913 00:10:52.869333 1726 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:52.869384 kubelet[1726]: I0913 00:10:52.869370 1726 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:52.873386 kubelet[1726]: I0913 00:10:52.873355 1726 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:52.873907 kubelet[1726]: I0913 00:10:52.873873 1726 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:10:52.874059 kubelet[1726]: I0913 00:10:52.874020 1726 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:52.874239 kubelet[1726]: I0913 00:10:52.874054 1726 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Sep 13 00:10:52.874325 kubelet[1726]: I0913 00:10:52.874314 1726 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:52.874359 kubelet[1726]: I0913 00:10:52.874327 1726 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:10:52.874600 kubelet[1726]: I0913 00:10:52.874577 1726 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:52.878248 kubelet[1726]: I0913 00:10:52.878187 1726 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:10:52.878248 kubelet[1726]: I0913 00:10:52.878229 1726 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:52.878347 kubelet[1726]: I0913 00:10:52.878266 1726 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:10:52.878347 kubelet[1726]: I0913 00:10:52.878284 1726 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:52.898657 kubelet[1726]: W0913 00:10:52.898588 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 13 00:10:52.898762 kubelet[1726]: E0913 00:10:52.898665 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:52.901558 kubelet[1726]: I0913 00:10:52.901459 1726 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:10:52.902513 kubelet[1726]: I0913 00:10:52.902485 1726 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 
00:10:52.902660 kubelet[1726]: W0913 00:10:52.902648 1726 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:10:52.904205 kubelet[1726]: I0913 00:10:52.904167 1726 server.go:1274] "Started kubelet" Sep 13 00:10:52.906039 kubelet[1726]: I0913 00:10:52.905990 1726 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:52.906388 kubelet[1726]: W0913 00:10:52.906236 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 13 00:10:52.906388 kubelet[1726]: E0913 00:10:52.906288 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:52.906681 kubelet[1726]: I0913 00:10:52.906562 1726 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:10:52.906864 kubelet[1726]: I0913 00:10:52.906806 1726 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:52.908125 kubelet[1726]: I0913 00:10:52.908106 1726 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:10:52.909522 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:10:52.909689 kubelet[1726]: I0913 00:10:52.909669 1726 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:52.913832 kubelet[1726]: I0913 00:10:52.913737 1726 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:52.915210 kubelet[1726]: I0913 00:10:52.915185 1726 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:10:52.915437 kubelet[1726]: I0913 00:10:52.915421 1726 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:10:52.915743 kubelet[1726]: I0913 00:10:52.915728 1726 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:52.916275 kubelet[1726]: W0913 00:10:52.916239 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 13 00:10:52.916403 kubelet[1726]: E0913 00:10:52.916376 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:52.920650 kubelet[1726]: E0913 00:10:52.920424 1726 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:52.922932 kubelet[1726]: E0913 00:10:52.922900 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:52.923342 kubelet[1726]: E0913 00:10:52.923310 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" Sep 13 00:10:52.923457 kubelet[1726]: I0913 00:10:52.923427 1726 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:52.923457 kubelet[1726]: I0913 00:10:52.923449 1726 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:52.923555 kubelet[1726]: I0913 00:10:52.923537 1726 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:52.926050 kubelet[1726]: E0913 00:10:52.924928 1726 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af13822697a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:10:52.904142756 +0000 UTC m=+0.672472026,LastTimestamp:2025-09-13 00:10:52.904142756 +0000 UTC m=+0.672472026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:10:52.944045 kubelet[1726]: I0913 00:10:52.944005 1726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:52.944798 kubelet[1726]: I0913 00:10:52.944780 1726 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:10:52.944894 kubelet[1726]: I0913 00:10:52.944882 1726 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:52.944952 kubelet[1726]: I0913 00:10:52.944943 1726 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:52.946261 kubelet[1726]: I0913 00:10:52.946167 1726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:10:52.946261 kubelet[1726]: I0913 00:10:52.946222 1726 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:10:52.946361 kubelet[1726]: I0913 00:10:52.946297 1726 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:10:52.947547 kubelet[1726]: W0913 00:10:52.946831 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 13 00:10:52.947676 kubelet[1726]: E0913 00:10:52.947655 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:52.947736 kubelet[1726]: E0913 00:10:52.947552 1726 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:53.023262 kubelet[1726]: E0913 00:10:53.023220 1726 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:53.029448 kubelet[1726]: I0913 00:10:53.029425 1726 policy_none.go:49] "None policy: Start" Sep 13 00:10:53.030187 kubelet[1726]: I0913 00:10:53.030171 1726 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:10:53.030312 kubelet[1726]: I0913 00:10:53.030300 1726 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:53.036754 kubelet[1726]: I0913 00:10:53.036732 1726 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:53.037019 kubelet[1726]: I0913 00:10:53.037006 1726 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:53.037183 kubelet[1726]: I0913 00:10:53.037151 1726 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:53.038134 kubelet[1726]: I0913 00:10:53.038105 1726 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:53.038871 kubelet[1726]: E0913 00:10:53.038853 1726 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:10:53.117723 kubelet[1726]: I0913 00:10:53.117622 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:53.117723 kubelet[1726]: I0913 00:10:53.117659 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:53.117723 kubelet[1726]: I0913 00:10:53.117692 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:53.118536 kubelet[1726]: I0913 00:10:53.117710 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:53.118862 kubelet[1726]: I0913 00:10:53.118842 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:53.119081 kubelet[1726]: I0913 00:10:53.119065 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:10:53.119177 kubelet[1726]: I0913 00:10:53.119162 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b565be16af10f6d951871a7d58fc3ee-ca-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"8b565be16af10f6d951871a7d58fc3ee\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:53.119280 kubelet[1726]: I0913 00:10:53.119266 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b565be16af10f6d951871a7d58fc3ee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b565be16af10f6d951871a7d58fc3ee\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:53.119365 kubelet[1726]: I0913 00:10:53.119351 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b565be16af10f6d951871a7d58fc3ee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b565be16af10f6d951871a7d58fc3ee\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:53.124005 kubelet[1726]: E0913 00:10:53.123971 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" Sep 13 00:10:53.138827 kubelet[1726]: I0913 00:10:53.138802 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:53.139367 kubelet[1726]: E0913 00:10:53.139340 1726 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 13 00:10:53.342526 kubelet[1726]: I0913 00:10:53.342429 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:53.343025 kubelet[1726]: E0913 00:10:53.342993 1726 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 13 
00:10:53.353323 kubelet[1726]: E0913 00:10:53.353301 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:53.353884 kubelet[1726]: E0913 00:10:53.353861 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:53.354097 env[1324]: time="2025-09-13T00:10:53.354052590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b565be16af10f6d951871a7d58fc3ee,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:53.354676 env[1324]: time="2025-09-13T00:10:53.354630507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:53.354896 kubelet[1726]: E0913 00:10:53.354877 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:53.355269 env[1324]: time="2025-09-13T00:10:53.355232017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:53.525367 kubelet[1726]: E0913 00:10:53.525266 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" Sep 13 00:10:53.744377 kubelet[1726]: I0913 00:10:53.744336 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:53.744696 kubelet[1726]: E0913 00:10:53.744662 1726 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 13 00:10:53.822110 kubelet[1726]: W0913 00:10:53.822068 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 13 00:10:53.822279 kubelet[1726]: E0913 00:10:53.822117 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:53.861942 kubelet[1726]: E0913 00:10:53.861356 1726 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af13822697a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:10:52.904142756 +0000 UTC m=+0.672472026,LastTimestamp:2025-09-13 00:10:52.904142756 +0000 UTC m=+0.672472026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:10:53.871232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918940326.mount: Deactivated successfully. 
Sep 13 00:10:53.879784 env[1324]: time="2025-09-13T00:10:53.878935174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.887300 env[1324]: time="2025-09-13T00:10:53.886552549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.890072 env[1324]: time="2025-09-13T00:10:53.890045647Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.895000 env[1324]: time="2025-09-13T00:10:53.894256694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.896414 env[1324]: time="2025-09-13T00:10:53.896303778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.898425 env[1324]: time="2025-09-13T00:10:53.898344944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.903300 env[1324]: time="2025-09-13T00:10:53.902037052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.904877 env[1324]: time="2025-09-13T00:10:53.904843430Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.909059 env[1324]: time="2025-09-13T00:10:53.909032965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.912889 env[1324]: time="2025-09-13T00:10:53.912860786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.916754 env[1324]: time="2025-09-13T00:10:53.916728473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.919840 env[1324]: time="2025-09-13T00:10:53.919777126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:10:53.949497 env[1324]: time="2025-09-13T00:10:53.942903716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:53.949497 env[1324]: time="2025-09-13T00:10:53.942946861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:53.949497 env[1324]: time="2025-09-13T00:10:53.942956857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.949497 env[1324]: time="2025-09-13T00:10:53.943242237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84e43783744bbacb9ec5a2ba7c7e18a56b6294908eb3207027c9e0d2657b605e pid=1769 runtime=io.containerd.runc.v2 Sep 13 00:10:53.957876 env[1324]: time="2025-09-13T00:10:53.957334028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:53.957876 env[1324]: time="2025-09-13T00:10:53.957371895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:53.957876 env[1324]: time="2025-09-13T00:10:53.957383251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.958988 env[1324]: time="2025-09-13T00:10:53.958341355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc6c22d82991e5bbf6e70cd50579271d4f0902a19b0b6b7a9e6bcd2cb86912cf pid=1787 runtime=io.containerd.runc.v2 Sep 13 00:10:53.978589 env[1324]: time="2025-09-13T00:10:53.972168039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:53.978589 env[1324]: time="2025-09-13T00:10:53.972232896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:53.978589 env[1324]: time="2025-09-13T00:10:53.972243972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.978589 env[1324]: time="2025-09-13T00:10:53.974694315Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70233d67a9397b77d5026d4fce25b7c28b628a2fc383634cd9b0d13884677b37 pid=1823 runtime=io.containerd.runc.v2 Sep 13 00:10:54.009270 env[1324]: time="2025-09-13T00:10:54.009184944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"84e43783744bbacb9ec5a2ba7c7e18a56b6294908eb3207027c9e0d2657b605e\"" Sep 13 00:10:54.010053 kubelet[1726]: W0913 00:10:54.009953 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 13 00:10:54.010053 kubelet[1726]: E0913 00:10:54.010022 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:54.010600 kubelet[1726]: E0913 00:10:54.010563 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:54.012606 env[1324]: time="2025-09-13T00:10:54.012567035Z" level=info msg="CreateContainer within sandbox \"84e43783744bbacb9ec5a2ba7c7e18a56b6294908eb3207027c9e0d2657b605e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:10:54.016446 kubelet[1726]: W0913 00:10:54.016387 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 13 00:10:54.016586 kubelet[1726]: E0913 00:10:54.016467 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:54.019615 env[1324]: time="2025-09-13T00:10:54.019576256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc6c22d82991e5bbf6e70cd50579271d4f0902a19b0b6b7a9e6bcd2cb86912cf\"" Sep 13 00:10:54.020549 kubelet[1726]: E0913 00:10:54.020519 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:54.027646 env[1324]: time="2025-09-13T00:10:54.027610221Z" level=info msg="CreateContainer within sandbox \"dc6c22d82991e5bbf6e70cd50579271d4f0902a19b0b6b7a9e6bcd2cb86912cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:10:54.031081 env[1324]: time="2025-09-13T00:10:54.031051573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b565be16af10f6d951871a7d58fc3ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"70233d67a9397b77d5026d4fce25b7c28b628a2fc383634cd9b0d13884677b37\"" Sep 13 00:10:54.031853 kubelet[1726]: E0913 00:10:54.031824 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:54.031931 env[1324]: 
time="2025-09-13T00:10:54.031906772Z" level=info msg="CreateContainer within sandbox \"84e43783744bbacb9ec5a2ba7c7e18a56b6294908eb3207027c9e0d2657b605e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"202d232a06cccf5e55b7fc4a1608cf9e66516bff7310c4bcbaa6279151ac4bfa\"" Sep 13 00:10:54.032912 env[1324]: time="2025-09-13T00:10:54.032880693Z" level=info msg="StartContainer for \"202d232a06cccf5e55b7fc4a1608cf9e66516bff7310c4bcbaa6279151ac4bfa\"" Sep 13 00:10:54.033700 env[1324]: time="2025-09-13T00:10:54.033666515Z" level=info msg="CreateContainer within sandbox \"70233d67a9397b77d5026d4fce25b7c28b628a2fc383634cd9b0d13884677b37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:10:54.042152 env[1324]: time="2025-09-13T00:10:54.042111426Z" level=info msg="CreateContainer within sandbox \"dc6c22d82991e5bbf6e70cd50579271d4f0902a19b0b6b7a9e6bcd2cb86912cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"660cd375177d1b353a6ad0d4ac63807365c8565853702830a5089fd3742d3c61\"" Sep 13 00:10:54.042668 env[1324]: time="2025-09-13T00:10:54.042630135Z" level=info msg="StartContainer for \"660cd375177d1b353a6ad0d4ac63807365c8565853702830a5089fd3742d3c61\"" Sep 13 00:10:54.047492 env[1324]: time="2025-09-13T00:10:54.047423044Z" level=info msg="CreateContainer within sandbox \"70233d67a9397b77d5026d4fce25b7c28b628a2fc383634cd9b0d13884677b37\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"958c4f1d70e71c5385a5ba3ca618317167b63b83008eb839592bd41c0cceeac9\"" Sep 13 00:10:54.048067 env[1324]: time="2025-09-13T00:10:54.048026206Z" level=info msg="StartContainer for \"958c4f1d70e71c5385a5ba3ca618317167b63b83008eb839592bd41c0cceeac9\"" Sep 13 00:10:54.109073 env[1324]: time="2025-09-13T00:10:54.108738535Z" level=info msg="StartContainer for \"202d232a06cccf5e55b7fc4a1608cf9e66516bff7310c4bcbaa6279151ac4bfa\" returns successfully" Sep 13 00:10:54.121638 env[1324]: 
time="2025-09-13T00:10:54.121589041Z" level=info msg="StartContainer for \"958c4f1d70e71c5385a5ba3ca618317167b63b83008eb839592bd41c0cceeac9\" returns successfully" Sep 13 00:10:54.127225 env[1324]: time="2025-09-13T00:10:54.127168571Z" level=info msg="StartContainer for \"660cd375177d1b353a6ad0d4ac63807365c8565853702830a5089fd3742d3c61\" returns successfully" Sep 13 00:10:54.546704 kubelet[1726]: I0913 00:10:54.546115 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:54.956637 kubelet[1726]: E0913 00:10:54.956536 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:54.957516 kubelet[1726]: E0913 00:10:54.957491 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:54.958784 kubelet[1726]: E0913 00:10:54.958757 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:55.938492 kubelet[1726]: E0913 00:10:55.938439 1726 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:10:55.960834 kubelet[1726]: E0913 00:10:55.960749 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:56.074565 kubelet[1726]: I0913 00:10:56.074507 1726 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:10:56.074565 kubelet[1726]: E0913 00:10:56.074553 1726 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 
13 00:10:56.084860 kubelet[1726]: E0913 00:10:56.084795 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:56.185943 kubelet[1726]: E0913 00:10:56.185886 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:56.286497 kubelet[1726]: E0913 00:10:56.286436 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:56.386858 kubelet[1726]: E0913 00:10:56.386751 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:56.487208 kubelet[1726]: E0913 00:10:56.487129 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:56.587829 kubelet[1726]: E0913 00:10:56.587695 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:56.688413 kubelet[1726]: E0913 00:10:56.688354 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:56.900403 kubelet[1726]: I0913 00:10:56.900292 1726 apiserver.go:52] "Watching apiserver" Sep 13 00:10:56.916010 kubelet[1726]: I0913 00:10:56.915952 1726 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:10:57.083969 kubelet[1726]: E0913 00:10:57.083934 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:57.961974 kubelet[1726]: E0913 00:10:57.961923 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:58.170976 systemd[1]: Reloading. 
Sep 13 00:10:58.211590 /usr/lib/systemd/system-generators/torcx-generator[2026]: time="2025-09-13T00:10:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:10:58.211621 /usr/lib/systemd/system-generators/torcx-generator[2026]: time="2025-09-13T00:10:58Z" level=info msg="torcx already run" Sep 13 00:10:58.273823 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:10:58.273848 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:10:58.289005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:58.357872 systemd[1]: Stopping kubelet.service... Sep 13 00:10:58.380616 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:10:58.380917 systemd[1]: Stopped kubelet.service. Sep 13 00:10:58.383054 systemd[1]: Starting kubelet.service... Sep 13 00:10:58.480316 systemd[1]: Started kubelet.service. Sep 13 00:10:58.519389 kubelet[2079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:58.519389 kubelet[2079]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:10:58.519389 kubelet[2079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:58.519753 kubelet[2079]: I0913 00:10:58.519656 2079 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:10:58.526704 kubelet[2079]: I0913 00:10:58.526286 2079 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:10:58.526844 kubelet[2079]: I0913 00:10:58.526828 2079 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:10:58.527160 kubelet[2079]: I0913 00:10:58.527139 2079 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:10:58.528604 kubelet[2079]: I0913 00:10:58.528581 2079 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:10:58.530659 kubelet[2079]: I0913 00:10:58.530619 2079 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:58.533326 kubelet[2079]: E0913 00:10:58.533282 2079 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:58.533397 kubelet[2079]: I0913 00:10:58.533327 2079 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:58.535933 kubelet[2079]: I0913 00:10:58.535912 2079 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:58.536391 kubelet[2079]: I0913 00:10:58.536378 2079 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:10:58.536505 kubelet[2079]: I0913 00:10:58.536479 2079 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:58.536658 kubelet[2079]: I0913 00:10:58.536507 2079 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Sep 13 00:10:58.536742 kubelet[2079]: I0913 00:10:58.536666 2079 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:58.536742 kubelet[2079]: I0913 00:10:58.536674 2079 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:10:58.536742 kubelet[2079]: I0913 00:10:58.536717 2079 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:58.536848 kubelet[2079]: I0913 00:10:58.536808 2079 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:10:58.536848 kubelet[2079]: I0913 00:10:58.536819 2079 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:58.536848 kubelet[2079]: I0913 00:10:58.536843 2079 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:10:58.536914 kubelet[2079]: I0913 00:10:58.536856 2079 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:58.538668 kubelet[2079]: I0913 00:10:58.538625 2079 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:10:58.539344 kubelet[2079]: I0913 00:10:58.539320 2079 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:10:58.539784 kubelet[2079]: I0913 00:10:58.539767 2079 server.go:1274] "Started kubelet" Sep 13 00:10:58.542445 kubelet[2079]: I0913 00:10:58.542424 2079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:58.542685 kubelet[2079]: I0913 00:10:58.542663 2079 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:58.547652 kubelet[2079]: I0913 00:10:58.547607 2079 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:58.547749 kubelet[2079]: I0913 00:10:58.547710 2079 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:10:58.547869 
kubelet[2079]: E0913 00:10:58.547810 2079 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.548224 kubelet[2079]: I0913 00:10:58.548142 2079 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:10:58.548356 kubelet[2079]: I0913 00:10:58.548335 2079 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:58.548585 kubelet[2079]: I0913 00:10:58.548560 2079 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:10:58.549771 kubelet[2079]: I0913 00:10:58.549716 2079 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:10:58.550301 kubelet[2079]: I0913 00:10:58.549914 2079 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:58.562925 kubelet[2079]: I0913 00:10:58.554040 2079 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:58.562925 kubelet[2079]: I0913 00:10:58.554182 2079 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:58.562925 kubelet[2079]: I0913 00:10:58.556603 2079 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:58.573630 kubelet[2079]: E0913 00:10:58.572873 2079 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:58.579496 kubelet[2079]: I0913 00:10:58.579460 2079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:58.580995 kubelet[2079]: I0913 00:10:58.580974 2079 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:10:58.581362 kubelet[2079]: I0913 00:10:58.581322 2079 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:10:58.581456 kubelet[2079]: I0913 00:10:58.581444 2079 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:10:58.581556 kubelet[2079]: E0913 00:10:58.581538 2079 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:58.615403 kubelet[2079]: I0913 00:10:58.615355 2079 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:10:58.615403 kubelet[2079]: I0913 00:10:58.615374 2079 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:58.615403 kubelet[2079]: I0913 00:10:58.615394 2079 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:58.615587 kubelet[2079]: I0913 00:10:58.615539 2079 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:10:58.615587 kubelet[2079]: I0913 00:10:58.615550 2079 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:10:58.615587 kubelet[2079]: I0913 00:10:58.615570 2079 policy_none.go:49] "None policy: Start" Sep 13 00:10:58.616072 kubelet[2079]: I0913 00:10:58.616056 2079 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:10:58.616138 kubelet[2079]: I0913 00:10:58.616076 2079 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:58.616237 kubelet[2079]: I0913 00:10:58.616223 2079 state_mem.go:75] "Updated machine memory state" Sep 13 00:10:58.617317 kubelet[2079]: I0913 00:10:58.617289 2079 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:58.617454 kubelet[2079]: I0913 00:10:58.617440 2079 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:58.617478 kubelet[2079]: I0913 00:10:58.617457 2079 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:58.620080 kubelet[2079]: I0913 00:10:58.619083 2079 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:58.702263 kubelet[2079]: E0913 00:10:58.702222 2079 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:10:58.724526 kubelet[2079]: I0913 00:10:58.724484 2079 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:58.732601 kubelet[2079]: I0913 00:10:58.732567 2079 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:10:58.732727 kubelet[2079]: I0913 00:10:58.732659 2079 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:10:58.850171 kubelet[2079]: I0913 00:10:58.850132 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:58.850344 kubelet[2079]: I0913 00:10:58.850185 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:58.850344 kubelet[2079]: I0913 00:10:58.850223 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b565be16af10f6d951871a7d58fc3ee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b565be16af10f6d951871a7d58fc3ee\") " 
pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:58.850344 kubelet[2079]: I0913 00:10:58.850240 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:10:58.850344 kubelet[2079]: I0913 00:10:58.850256 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b565be16af10f6d951871a7d58fc3ee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b565be16af10f6d951871a7d58fc3ee\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:58.850344 kubelet[2079]: I0913 00:10:58.850273 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b565be16af10f6d951871a7d58fc3ee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b565be16af10f6d951871a7d58fc3ee\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:58.850457 kubelet[2079]: I0913 00:10:58.850288 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:58.850457 kubelet[2079]: I0913 00:10:58.850302 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:58.850457 kubelet[2079]: I0913 00:10:58.850317 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:59.003275 kubelet[2079]: E0913 00:10:59.003244 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.005564 kubelet[2079]: E0913 00:10:59.003293 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.005564 kubelet[2079]: E0913 00:10:59.003618 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.235668 sudo[2116]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:10:59.236004 sudo[2116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:10:59.537823 kubelet[2079]: I0913 00:10:59.537784 2079 apiserver.go:52] "Watching apiserver" Sep 13 00:10:59.548915 kubelet[2079]: I0913 00:10:59.548735 2079 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:10:59.600463 kubelet[2079]: E0913 00:10:59.600423 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.601349 kubelet[2079]: E0913 00:10:59.601305 2079 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.601597 kubelet[2079]: E0913 00:10:59.601571 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.603838 kubelet[2079]: I0913 00:10:59.603765 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.603747684 podStartE2EDuration="1.603747684s" podCreationTimestamp="2025-09-13 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:59.603522697 +0000 UTC m=+1.120038056" watchObservedRunningTime="2025-09-13 00:10:59.603747684 +0000 UTC m=+1.120263083" Sep 13 00:10:59.624079 kubelet[2079]: I0913 00:10:59.623914 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.623896139 podStartE2EDuration="1.623896139s" podCreationTimestamp="2025-09-13 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:59.613540038 +0000 UTC m=+1.130055437" watchObservedRunningTime="2025-09-13 00:10:59.623896139 +0000 UTC m=+1.140411498" Sep 13 00:10:59.624381 kubelet[2079]: I0913 00:10:59.624347 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.624329756 podStartE2EDuration="2.624329756s" podCreationTimestamp="2025-09-13 00:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:59.624110888 +0000 UTC 
m=+1.140626287" watchObservedRunningTime="2025-09-13 00:10:59.624329756 +0000 UTC m=+1.140845155" Sep 13 00:10:59.780437 sudo[2116]: pam_unix(sudo:session): session closed for user root Sep 13 00:11:00.601267 kubelet[2079]: E0913 00:11:00.601221 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:01.602926 kubelet[2079]: E0913 00:11:01.602895 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:02.239931 sudo[1443]: pam_unix(sudo:session): session closed for user root Sep 13 00:11:02.241410 sshd[1437]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:02.243878 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:41770.service: Deactivated successfully. Sep 13 00:11:02.244770 systemd-logind[1308]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:11:02.244829 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:11:02.245625 systemd-logind[1308]: Removed session 5. Sep 13 00:11:04.055778 kubelet[2079]: I0913 00:11:04.055711 2079 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:11:04.056406 env[1324]: time="2025-09-13T00:11:04.056092041Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:11:04.056729 kubelet[2079]: I0913 00:11:04.056707 2079 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:11:05.287383 kubelet[2079]: I0913 00:11:05.287344 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-hubble-tls\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.287856 kubelet[2079]: I0913 00:11:05.287832 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76ae6790-5578-41aa-9517-dcecee93fa0a-lib-modules\") pod \"kube-proxy-8mnkw\" (UID: \"76ae6790-5578-41aa-9517-dcecee93fa0a\") " pod="kube-system/kube-proxy-8mnkw" Sep 13 00:11:05.287941 kubelet[2079]: I0913 00:11:05.287927 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkbh\" (UniqueName: \"kubernetes.io/projected/76ae6790-5578-41aa-9517-dcecee93fa0a-kube-api-access-szkbh\") pod \"kube-proxy-8mnkw\" (UID: \"76ae6790-5578-41aa-9517-dcecee93fa0a\") " pod="kube-system/kube-proxy-8mnkw" Sep 13 00:11:05.288019 kubelet[2079]: I0913 00:11:05.288007 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-hostproc\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288098 kubelet[2079]: I0913 00:11:05.288085 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-kernel\") pod \"cilium-vpfpm\" (UID: 
\"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288173 kubelet[2079]: I0913 00:11:05.288161 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76ae6790-5578-41aa-9517-dcecee93fa0a-xtables-lock\") pod \"kube-proxy-8mnkw\" (UID: \"76ae6790-5578-41aa-9517-dcecee93fa0a\") " pod="kube-system/kube-proxy-8mnkw" Sep 13 00:11:05.288265 kubelet[2079]: I0913 00:11:05.288252 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-bpf-maps\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288347 kubelet[2079]: I0913 00:11:05.288333 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-net\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288422 kubelet[2079]: I0913 00:11:05.288408 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76ae6790-5578-41aa-9517-dcecee93fa0a-kube-proxy\") pod \"kube-proxy-8mnkw\" (UID: \"76ae6790-5578-41aa-9517-dcecee93fa0a\") " pod="kube-system/kube-proxy-8mnkw" Sep 13 00:11:05.288494 kubelet[2079]: I0913 00:11:05.288482 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cni-path\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288570 kubelet[2079]: I0913 00:11:05.288558 
2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbsc\" (UniqueName: \"kubernetes.io/projected/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-kube-api-access-lrbsc\") pod \"cilium-operator-5d85765b45-2fcgs\" (UID: \"0b3d9826-d8fc-43ec-9b03-0014d8e17d29\") " pod="kube-system/cilium-operator-5d85765b45-2fcgs" Sep 13 00:11:05.288650 kubelet[2079]: I0913 00:11:05.288637 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-run\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288722 kubelet[2079]: I0913 00:11:05.288710 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-xtables-lock\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288803 kubelet[2079]: I0913 00:11:05.288791 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c84b9222-a5d9-455a-a33e-66e54977b741-clustermesh-secrets\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288874 kubelet[2079]: I0913 00:11:05.288863 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vncm2\" (UniqueName: \"kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-kube-api-access-vncm2\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.288947 kubelet[2079]: I0913 00:11:05.288935 2079 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-cilium-config-path\") pod \"cilium-operator-5d85765b45-2fcgs\" (UID: \"0b3d9826-d8fc-43ec-9b03-0014d8e17d29\") " pod="kube-system/cilium-operator-5d85765b45-2fcgs" Sep 13 00:11:05.289029 kubelet[2079]: I0913 00:11:05.289009 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-cgroup\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.289100 kubelet[2079]: I0913 00:11:05.289087 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-etc-cni-netd\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.289166 kubelet[2079]: I0913 00:11:05.289153 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-lib-modules\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.289249 kubelet[2079]: I0913 00:11:05.289237 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-config-path\") pod \"cilium-vpfpm\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " pod="kube-system/cilium-vpfpm" Sep 13 00:11:05.392587 kubelet[2079]: I0913 00:11:05.392545 2079 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:11:05.411698 kubelet[2079]: E0913 00:11:05.408958 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.411812 env[1324]: time="2025-09-13T00:11:05.411299990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vpfpm,Uid:c84b9222-a5d9-455a-a33e-66e54977b741,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:05.428525 env[1324]: time="2025-09-13T00:11:05.428465223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:05.428685 env[1324]: time="2025-09-13T00:11:05.428661911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:05.428800 env[1324]: time="2025-09-13T00:11:05.428777212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:05.429083 env[1324]: time="2025-09-13T00:11:05.429046369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67 pid=2175 runtime=io.containerd.runc.v2 Sep 13 00:11:05.466145 env[1324]: time="2025-09-13T00:11:05.466109834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vpfpm,Uid:c84b9222-a5d9-455a-a33e-66e54977b741,Namespace:kube-system,Attempt:0,} returns sandbox id \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\"" Sep 13 00:11:05.466791 kubelet[2079]: E0913 00:11:05.466766 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.472995 env[1324]: time="2025-09-13T00:11:05.472957570Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:11:05.483530 kubelet[2079]: E0913 00:11:05.483493 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.484874 env[1324]: time="2025-09-13T00:11:05.484844493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2fcgs,Uid:0b3d9826-d8fc-43ec-9b03-0014d8e17d29,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:05.499817 env[1324]: time="2025-09-13T00:11:05.499745651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:05.499817 env[1324]: time="2025-09-13T00:11:05.499786564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:05.499817 env[1324]: time="2025-09-13T00:11:05.499797642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:05.499938 env[1324]: time="2025-09-13T00:11:05.499915104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a pid=2216 runtime=io.containerd.runc.v2 Sep 13 00:11:05.547595 env[1324]: time="2025-09-13T00:11:05.546603496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2fcgs,Uid:0b3d9826-d8fc-43ec-9b03-0014d8e17d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a\"" Sep 13 00:11:05.547749 kubelet[2079]: E0913 00:11:05.547179 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.694831 kubelet[2079]: E0913 00:11:05.694453 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.695611 env[1324]: time="2025-09-13T00:11:05.695025728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mnkw,Uid:76ae6790-5578-41aa-9517-dcecee93fa0a,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:05.708630 env[1324]: time="2025-09-13T00:11:05.708562305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:05.708777 env[1324]: time="2025-09-13T00:11:05.708755434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:05.708863 env[1324]: time="2025-09-13T00:11:05.708843620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:05.709138 env[1324]: time="2025-09-13T00:11:05.709109297Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3ebe132845544aea5245db4dbfafc170e9a52e8f4766f8a2e2895e2fa64962c pid=2257 runtime=io.containerd.runc.v2 Sep 13 00:11:05.743783 env[1324]: time="2025-09-13T00:11:05.743738794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mnkw,Uid:76ae6790-5578-41aa-9517-dcecee93fa0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3ebe132845544aea5245db4dbfafc170e9a52e8f4766f8a2e2895e2fa64962c\"" Sep 13 00:11:05.745705 kubelet[2079]: E0913 00:11:05.744643 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.748115 env[1324]: time="2025-09-13T00:11:05.748080934Z" level=info msg="CreateContainer within sandbox \"d3ebe132845544aea5245db4dbfafc170e9a52e8f4766f8a2e2895e2fa64962c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:11:05.763886 env[1324]: time="2025-09-13T00:11:05.763835434Z" level=info msg="CreateContainer within sandbox \"d3ebe132845544aea5245db4dbfafc170e9a52e8f4766f8a2e2895e2fa64962c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e040dfb981548c3d09c5051b8ed312690ac4a80cf17e98080a8563347874f8ed\"" Sep 13 00:11:05.765942 env[1324]: time="2025-09-13T00:11:05.764446695Z" level=info msg="StartContainer for \"e040dfb981548c3d09c5051b8ed312690ac4a80cf17e98080a8563347874f8ed\"" Sep 13 00:11:05.832567 env[1324]: time="2025-09-13T00:11:05.832448172Z" level=info msg="StartContainer for 
\"e040dfb981548c3d09c5051b8ed312690ac4a80cf17e98080a8563347874f8ed\" returns successfully" Sep 13 00:11:06.612555 kubelet[2079]: E0913 00:11:06.612503 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:06.628599 kubelet[2079]: I0913 00:11:06.628545 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mnkw" podStartSLOduration=1.6285278 podStartE2EDuration="1.6285278s" podCreationTimestamp="2025-09-13 00:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:06.628382342 +0000 UTC m=+8.144897741" watchObservedRunningTime="2025-09-13 00:11:06.6285278 +0000 UTC m=+8.145043199" Sep 13 00:11:07.315922 kubelet[2079]: E0913 00:11:07.311937 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:07.614617 kubelet[2079]: E0913 00:11:07.613668 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:08.138873 kubelet[2079]: E0913 00:11:08.138829 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:08.614838 kubelet[2079]: E0913 00:11:08.614798 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:09.983610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288478573.mount: Deactivated successfully. 
Sep 13 00:11:11.389794 kubelet[2079]: E0913 00:11:11.389752 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:12.747125 env[1324]: time="2025-09-13T00:11:12.747076590Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:11:12.750970 env[1324]: time="2025-09-13T00:11:12.750913396Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:11:12.754591 env[1324]: time="2025-09-13T00:11:12.754542264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:11:12.755142 env[1324]: time="2025-09-13T00:11:12.755097567Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 00:11:12.762223 env[1324]: time="2025-09-13T00:11:12.762171641Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:11:12.764536 env[1324]: time="2025-09-13T00:11:12.764497362Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:11:12.784402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829603270.mount: 
Deactivated successfully. Sep 13 00:11:12.789238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197378432.mount: Deactivated successfully. Sep 13 00:11:12.792055 env[1324]: time="2025-09-13T00:11:12.792006899Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\"" Sep 13 00:11:12.794680 env[1324]: time="2025-09-13T00:11:12.794621271Z" level=info msg="StartContainer for \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\"" Sep 13 00:11:12.848237 env[1324]: time="2025-09-13T00:11:12.845630957Z" level=info msg="StartContainer for \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\" returns successfully" Sep 13 00:11:13.017060 env[1324]: time="2025-09-13T00:11:13.016945277Z" level=info msg="shim disconnected" id=f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f Sep 13 00:11:13.017060 env[1324]: time="2025-09-13T00:11:13.016992672Z" level=warning msg="cleaning up after shim disconnected" id=f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f namespace=k8s.io Sep 13 00:11:13.017060 env[1324]: time="2025-09-13T00:11:13.017002791Z" level=info msg="cleaning up dead shim" Sep 13 00:11:13.026308 env[1324]: time="2025-09-13T00:11:13.026262780Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:11:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2504 runtime=io.containerd.runc.v2\n" Sep 13 00:11:13.629834 kubelet[2079]: E0913 00:11:13.629399 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:13.633228 env[1324]: time="2025-09-13T00:11:13.631802809Z" level=info msg="CreateContainer within sandbox 
\"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:11:13.649610 env[1324]: time="2025-09-13T00:11:13.649242811Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\"" Sep 13 00:11:13.652827 env[1324]: time="2025-09-13T00:11:13.651383645Z" level=info msg="StartContainer for \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\"" Sep 13 00:11:13.711693 env[1324]: time="2025-09-13T00:11:13.711646368Z" level=info msg="StartContainer for \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\" returns successfully" Sep 13 00:11:13.720292 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:11:13.720578 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:11:13.720775 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:11:13.723732 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:11:13.733050 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:11:13.745739 env[1324]: time="2025-09-13T00:11:13.745687453Z" level=info msg="shim disconnected" id=1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574 Sep 13 00:11:13.745739 env[1324]: time="2025-09-13T00:11:13.745740288Z" level=warning msg="cleaning up after shim disconnected" id=1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574 namespace=k8s.io Sep 13 00:11:13.745961 env[1324]: time="2025-09-13T00:11:13.745750967Z" level=info msg="cleaning up dead shim" Sep 13 00:11:13.753669 env[1324]: time="2025-09-13T00:11:13.753622970Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:11:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2568 runtime=io.containerd.runc.v2\n" Sep 13 00:11:13.780445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f-rootfs.mount: Deactivated successfully. Sep 13 00:11:14.025399 update_engine[1312]: I0913 00:11:14.025243 1312 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:11:14.628282 env[1324]: time="2025-09-13T00:11:14.628237204Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:11:14.630434 env[1324]: time="2025-09-13T00:11:14.629996245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:11:14.632352 env[1324]: time="2025-09-13T00:11:14.632319835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:11:14.633463 env[1324]: time="2025-09-13T00:11:14.632920941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 00:11:14.638018 kubelet[2079]: E0913 00:11:14.637829 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:14.645655 env[1324]: time="2025-09-13T00:11:14.645604597Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:11:14.645819 env[1324]: time="2025-09-13T00:11:14.645778702Z" level=info msg="CreateContainer within sandbox \"d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:11:14.679791 env[1324]: 
time="2025-09-13T00:11:14.679719721Z" level=info msg="CreateContainer within sandbox \"d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\"" Sep 13 00:11:14.679933 env[1324]: time="2025-09-13T00:11:14.679901464Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\"" Sep 13 00:11:14.680668 env[1324]: time="2025-09-13T00:11:14.680624239Z" level=info msg="StartContainer for \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\"" Sep 13 00:11:14.681389 env[1324]: time="2025-09-13T00:11:14.681342334Z" level=info msg="StartContainer for \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\"" Sep 13 00:11:14.805849 env[1324]: time="2025-09-13T00:11:14.803383328Z" level=info msg="StartContainer for \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\" returns successfully" Sep 13 00:11:14.805849 env[1324]: time="2025-09-13T00:11:14.803575911Z" level=info msg="StartContainer for \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\" returns successfully" Sep 13 00:11:14.824078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9-rootfs.mount: Deactivated successfully. 
Sep 13 00:11:14.832118 env[1324]: time="2025-09-13T00:11:14.832067261Z" level=info msg="shim disconnected" id=ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9 Sep 13 00:11:14.832118 env[1324]: time="2025-09-13T00:11:14.832120216Z" level=warning msg="cleaning up after shim disconnected" id=ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9 namespace=k8s.io Sep 13 00:11:14.832327 env[1324]: time="2025-09-13T00:11:14.832131455Z" level=info msg="cleaning up dead shim" Sep 13 00:11:14.849916 env[1324]: time="2025-09-13T00:11:14.849859217Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:11:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2676 runtime=io.containerd.runc.v2\n" Sep 13 00:11:15.642387 kubelet[2079]: E0913 00:11:15.642350 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:15.645842 kubelet[2079]: E0913 00:11:15.645790 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:15.647702 env[1324]: time="2025-09-13T00:11:15.647656712Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:11:15.655873 kubelet[2079]: I0913 00:11:15.655807 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2fcgs" podStartSLOduration=1.569113518 podStartE2EDuration="10.655789344s" podCreationTimestamp="2025-09-13 00:11:05 +0000 UTC" firstStartedPulling="2025-09-13 00:11:05.547921284 +0000 UTC m=+7.064436643" lastFinishedPulling="2025-09-13 00:11:14.63459707 +0000 UTC m=+16.151112469" observedRunningTime="2025-09-13 00:11:15.654044972 +0000 UTC 
m=+17.170560371" watchObservedRunningTime="2025-09-13 00:11:15.655789344 +0000 UTC m=+17.172304943" Sep 13 00:11:15.679738 env[1324]: time="2025-09-13T00:11:15.679684804Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\"" Sep 13 00:11:15.680545 env[1324]: time="2025-09-13T00:11:15.680505734Z" level=info msg="StartContainer for \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\"" Sep 13 00:11:15.746986 env[1324]: time="2025-09-13T00:11:15.746938878Z" level=info msg="StartContainer for \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\" returns successfully" Sep 13 00:11:15.761557 env[1324]: time="2025-09-13T00:11:15.761500886Z" level=info msg="shim disconnected" id=833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764 Sep 13 00:11:15.761805 env[1324]: time="2025-09-13T00:11:15.761784822Z" level=warning msg="cleaning up after shim disconnected" id=833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764 namespace=k8s.io Sep 13 00:11:15.761881 env[1324]: time="2025-09-13T00:11:15.761868655Z" level=info msg="cleaning up dead shim" Sep 13 00:11:15.768384 env[1324]: time="2025-09-13T00:11:15.768346348Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:11:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2731 runtime=io.containerd.runc.v2\n" Sep 13 00:11:15.780208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185648449.mount: Deactivated successfully. 
Sep 13 00:11:16.650175 kubelet[2079]: E0913 00:11:16.650131 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:16.650756 kubelet[2079]: E0913 00:11:16.650732 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:16.653290 env[1324]: time="2025-09-13T00:11:16.652831690Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:11:16.679459 env[1324]: time="2025-09-13T00:11:16.679399745Z" level=info msg="CreateContainer within sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\"" Sep 13 00:11:16.679967 env[1324]: time="2025-09-13T00:11:16.679934702Z" level=info msg="StartContainer for \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\"" Sep 13 00:11:16.733653 env[1324]: time="2025-09-13T00:11:16.733602688Z" level=info msg="StartContainer for \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\" returns successfully" Sep 13 00:11:16.883279 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 13 00:11:16.893722 kubelet[2079]: I0913 00:11:16.893690 2079 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:11:17.011218 kubelet[2079]: I0913 00:11:17.011085 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57k5\" (UniqueName: \"kubernetes.io/projected/e370fe16-5884-4fa9-895f-df7883fbe6a9-kube-api-access-r57k5\") pod \"coredns-7c65d6cfc9-z6vl6\" (UID: \"e370fe16-5884-4fa9-895f-df7883fbe6a9\") " pod="kube-system/coredns-7c65d6cfc9-z6vl6" Sep 13 00:11:17.011218 kubelet[2079]: I0913 00:11:17.011133 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkbh\" (UniqueName: \"kubernetes.io/projected/1077a582-194b-4e23-b1a6-865e432434e9-kube-api-access-gtkbh\") pod \"coredns-7c65d6cfc9-jdg2h\" (UID: \"1077a582-194b-4e23-b1a6-865e432434e9\") " pod="kube-system/coredns-7c65d6cfc9-jdg2h" Sep 13 00:11:17.011218 kubelet[2079]: I0913 00:11:17.011154 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1077a582-194b-4e23-b1a6-865e432434e9-config-volume\") pod \"coredns-7c65d6cfc9-jdg2h\" (UID: \"1077a582-194b-4e23-b1a6-865e432434e9\") " pod="kube-system/coredns-7c65d6cfc9-jdg2h" Sep 13 00:11:17.011218 kubelet[2079]: I0913 00:11:17.011184 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e370fe16-5884-4fa9-895f-df7883fbe6a9-config-volume\") pod \"coredns-7c65d6cfc9-z6vl6\" (UID: \"e370fe16-5884-4fa9-895f-df7883fbe6a9\") " pod="kube-system/coredns-7c65d6cfc9-z6vl6" Sep 13 00:11:17.123221 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 13 00:11:17.228294 kubelet[2079]: E0913 00:11:17.228249 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:17.229756 env[1324]: time="2025-09-13T00:11:17.229705378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdg2h,Uid:1077a582-194b-4e23-b1a6-865e432434e9,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:17.235007 kubelet[2079]: E0913 00:11:17.234969 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:17.235524 env[1324]: time="2025-09-13T00:11:17.235459470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6vl6,Uid:e370fe16-5884-4fa9-895f-df7883fbe6a9,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:17.654327 kubelet[2079]: E0913 00:11:17.654266 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:17.675000 kubelet[2079]: I0913 00:11:17.674904 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vpfpm" podStartSLOduration=5.385410039 podStartE2EDuration="12.674870778s" podCreationTimestamp="2025-09-13 00:11:05 +0000 UTC" firstStartedPulling="2025-09-13 00:11:05.472498804 +0000 UTC m=+6.989014163" lastFinishedPulling="2025-09-13 00:11:12.761959503 +0000 UTC m=+14.278474902" observedRunningTime="2025-09-13 00:11:17.67390241 +0000 UTC m=+19.190417809" watchObservedRunningTime="2025-09-13 00:11:17.674870778 +0000 UTC m=+19.191386177" Sep 13 00:11:18.655385 kubelet[2079]: E0913 00:11:18.655352 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 13 00:11:18.754720 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:11:18.754387 systemd-networkd[1097]: cilium_host: Link UP Sep 13 00:11:18.754493 systemd-networkd[1097]: cilium_net: Link UP Sep 13 00:11:18.754495 systemd-networkd[1097]: cilium_net: Gained carrier Sep 13 00:11:18.754630 systemd-networkd[1097]: cilium_host: Gained carrier Sep 13 00:11:18.806302 systemd-networkd[1097]: cilium_net: Gained IPv6LL Sep 13 00:11:18.837337 systemd-networkd[1097]: cilium_vxlan: Link UP Sep 13 00:11:18.837343 systemd-networkd[1097]: cilium_vxlan: Gained carrier Sep 13 00:11:19.107813 kernel: NET: Registered PF_ALG protocol family Sep 13 00:11:19.656820 kubelet[2079]: E0913 00:11:19.656775 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:19.733645 systemd-networkd[1097]: lxc_health: Link UP Sep 13 00:11:19.748231 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:11:19.749015 systemd-networkd[1097]: lxc_health: Gained carrier Sep 13 00:11:19.764275 systemd-networkd[1097]: cilium_host: Gained IPv6LL Sep 13 00:11:20.292348 systemd-networkd[1097]: lxc6e074d90e29f: Link UP Sep 13 00:11:20.303226 kernel: eth0: renamed from tmpce59c Sep 13 00:11:20.316233 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:11:20.316336 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6e074d90e29f: link becomes ready Sep 13 00:11:20.316846 systemd-networkd[1097]: lxc6e074d90e29f: Gained carrier Sep 13 00:11:20.318287 systemd-networkd[1097]: lxcf26ba6a02aab: Link UP Sep 13 00:11:20.325220 kernel: eth0: renamed from tmp03e5d Sep 13 00:11:20.329992 systemd-networkd[1097]: lxcf26ba6a02aab: Gained carrier Sep 13 00:11:20.330235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf26ba6a02aab: link becomes ready Sep 13 00:11:20.531302 systemd-networkd[1097]: cilium_vxlan: Gained 
IPv6LL Sep 13 00:11:20.657814 kubelet[2079]: E0913 00:11:20.657711 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:21.043298 systemd-networkd[1097]: lxc_health: Gained IPv6LL Sep 13 00:11:21.659035 kubelet[2079]: E0913 00:11:21.659000 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:21.811362 systemd-networkd[1097]: lxc6e074d90e29f: Gained IPv6LL Sep 13 00:11:21.939333 systemd-networkd[1097]: lxcf26ba6a02aab: Gained IPv6LL Sep 13 00:11:22.661464 kubelet[2079]: E0913 00:11:22.661424 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:23.891244 env[1324]: time="2025-09-13T00:11:23.891170537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:23.891672 env[1324]: time="2025-09-13T00:11:23.891625554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:23.891799 env[1324]: time="2025-09-13T00:11:23.891758627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:23.892903 env[1324]: time="2025-09-13T00:11:23.892850252Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce59c53257e0fd87d172633f384d9a3e995ebd81fd90b3b44b0411ddc3864d7b pid=3295 runtime=io.containerd.runc.v2 Sep 13 00:11:23.893091 env[1324]: time="2025-09-13T00:11:23.893042403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:23.893091 env[1324]: time="2025-09-13T00:11:23.893081401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:23.893174 env[1324]: time="2025-09-13T00:11:23.893091520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:23.893431 env[1324]: time="2025-09-13T00:11:23.893309829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03e5d9366eda517836870cfed2ab1082b902431bc998ab9ed31c494d60cfc4fd pid=3305 runtime=io.containerd.runc.v2 Sep 13 00:11:23.919800 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:23.926591 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:23.941886 env[1324]: time="2025-09-13T00:11:23.941850540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdg2h,Uid:1077a582-194b-4e23-b1a6-865e432434e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce59c53257e0fd87d172633f384d9a3e995ebd81fd90b3b44b0411ddc3864d7b\"" Sep 13 00:11:23.942756 kubelet[2079]: E0913 00:11:23.942723 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:23.945025 env[1324]: time="2025-09-13T00:11:23.944987062Z" level=info msg="CreateContainer within sandbox \"ce59c53257e0fd87d172633f384d9a3e995ebd81fd90b3b44b0411ddc3864d7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:11:23.950703 env[1324]: time="2025-09-13T00:11:23.950659616Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-z6vl6,Uid:e370fe16-5884-4fa9-895f-df7883fbe6a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"03e5d9366eda517836870cfed2ab1082b902431bc998ab9ed31c494d60cfc4fd\"" Sep 13 00:11:23.951459 kubelet[2079]: E0913 00:11:23.951437 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:23.956332 env[1324]: time="2025-09-13T00:11:23.954287393Z" level=info msg="CreateContainer within sandbox \"03e5d9366eda517836870cfed2ab1082b902431bc998ab9ed31c494d60cfc4fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:11:23.959736 env[1324]: time="2025-09-13T00:11:23.959695480Z" level=info msg="CreateContainer within sandbox \"ce59c53257e0fd87d172633f384d9a3e995ebd81fd90b3b44b0411ddc3864d7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c003ff9ff44309fd760da9ed87cb912bd765d8191bf1dee39d269025d1afb129\"" Sep 13 00:11:23.960189 env[1324]: time="2025-09-13T00:11:23.960156697Z" level=info msg="StartContainer for \"c003ff9ff44309fd760da9ed87cb912bd765d8191bf1dee39d269025d1afb129\"" Sep 13 00:11:23.967349 env[1324]: time="2025-09-13T00:11:23.967314456Z" level=info msg="CreateContainer within sandbox \"03e5d9366eda517836870cfed2ab1082b902431bc998ab9ed31c494d60cfc4fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d93fe5ff908704f993f3e2b6fcf49038fbdf2854f5c682b681ce135fb13d4c30\"" Sep 13 00:11:23.967832 env[1324]: time="2025-09-13T00:11:23.967809871Z" level=info msg="StartContainer for \"d93fe5ff908704f993f3e2b6fcf49038fbdf2854f5c682b681ce135fb13d4c30\"" Sep 13 00:11:24.004788 env[1324]: time="2025-09-13T00:11:24.004742179Z" level=info msg="StartContainer for \"c003ff9ff44309fd760da9ed87cb912bd765d8191bf1dee39d269025d1afb129\" returns successfully" Sep 13 00:11:24.020214 env[1324]: time="2025-09-13T00:11:24.020128051Z" level=info 
msg="StartContainer for \"d93fe5ff908704f993f3e2b6fcf49038fbdf2854f5c682b681ce135fb13d4c30\" returns successfully" Sep 13 00:11:24.665298 kubelet[2079]: E0913 00:11:24.665265 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:24.666794 kubelet[2079]: E0913 00:11:24.666772 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:24.685917 kubelet[2079]: I0913 00:11:24.685751 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jdg2h" podStartSLOduration=19.68573745 podStartE2EDuration="19.68573745s" podCreationTimestamp="2025-09-13 00:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:24.676457889 +0000 UTC m=+26.192973288" watchObservedRunningTime="2025-09-13 00:11:24.68573745 +0000 UTC m=+26.202252809" Sep 13 00:11:25.669004 kubelet[2079]: E0913 00:11:25.668977 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:25.669382 kubelet[2079]: E0913 00:11:25.669020 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:26.671123 kubelet[2079]: E0913 00:11:26.671088 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:26.671903 kubelet[2079]: E0913 00:11:26.671880 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:29.020988 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:37892.service. Sep 13 00:11:29.062881 sshd[3449]: Accepted publickey for core from 10.0.0.1 port 37892 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:29.064371 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:29.068949 systemd-logind[1308]: New session 6 of user core. Sep 13 00:11:29.069462 systemd[1]: Started session-6.scope. Sep 13 00:11:29.192884 sshd[3449]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:29.195443 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:37892.service: Deactivated successfully. Sep 13 00:11:29.196396 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:11:29.196492 systemd-logind[1308]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:11:29.197537 systemd-logind[1308]: Removed session 6. Sep 13 00:11:34.196294 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:60092.service. Sep 13 00:11:34.239493 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 60092 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:34.241381 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:34.247323 systemd-logind[1308]: New session 7 of user core. Sep 13 00:11:34.248547 systemd[1]: Started session-7.scope. Sep 13 00:11:34.389104 sshd[3465]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:34.391764 systemd-logind[1308]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:11:34.391806 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:60092.service: Deactivated successfully. Sep 13 00:11:34.392642 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:11:34.393032 systemd-logind[1308]: Removed session 7. 
Sep 13 00:11:39.392406 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:60104.service. Sep 13 00:11:39.444034 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 60104 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:39.448363 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:39.455640 systemd[1]: Started session-8.scope. Sep 13 00:11:39.456382 systemd-logind[1308]: New session 8 of user core. Sep 13 00:11:39.605319 sshd[3483]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:39.608696 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:60104.service: Deactivated successfully. Sep 13 00:11:39.609932 systemd-logind[1308]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:11:39.609961 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:11:39.614324 systemd-logind[1308]: Removed session 8. Sep 13 00:11:44.610135 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:45702.service. Sep 13 00:11:44.651189 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 45702 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:44.651680 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:44.657041 systemd-logind[1308]: New session 9 of user core. Sep 13 00:11:44.660355 systemd[1]: Started session-9.scope. Sep 13 00:11:44.800747 sshd[3499]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:44.804557 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:45706.service. Sep 13 00:11:44.806817 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:45702.service: Deactivated successfully. Sep 13 00:11:44.807956 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:11:44.807970 systemd-logind[1308]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:11:44.809919 systemd-logind[1308]: Removed session 9. 
Sep 13 00:11:44.849203 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 45706 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:44.850475 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:44.853893 systemd-logind[1308]: New session 10 of user core. Sep 13 00:11:44.855062 systemd[1]: Started session-10.scope. Sep 13 00:11:45.098599 sshd[3512]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:45.102122 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:45716.service. Sep 13 00:11:45.105652 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:45706.service: Deactivated successfully. Sep 13 00:11:45.106595 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:11:45.109374 systemd-logind[1308]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:11:45.110377 systemd-logind[1308]: Removed session 10. Sep 13 00:11:45.159369 sshd[3524]: Accepted publickey for core from 10.0.0.1 port 45716 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:45.160958 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:45.166851 systemd[1]: Started session-11.scope. Sep 13 00:11:45.167401 systemd-logind[1308]: New session 11 of user core. Sep 13 00:11:45.282993 sshd[3524]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:45.285488 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:45716.service: Deactivated successfully. Sep 13 00:11:45.286283 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:11:45.287141 systemd-logind[1308]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:11:45.287996 systemd-logind[1308]: Removed session 11. Sep 13 00:11:50.286591 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:59644.service. 
Sep 13 00:11:50.323156 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 59644 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:50.325046 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:50.328887 systemd-logind[1308]: New session 12 of user core. Sep 13 00:11:50.329703 systemd[1]: Started session-12.scope. Sep 13 00:11:50.444645 sshd[3540]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:50.447190 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:59644.service: Deactivated successfully. Sep 13 00:11:50.448208 systemd-logind[1308]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:11:50.448232 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:11:50.450486 systemd-logind[1308]: Removed session 12. Sep 13 00:11:55.450147 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:59660.service. Sep 13 00:11:55.499371 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 59660 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:55.501630 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:55.507797 systemd-logind[1308]: New session 13 of user core. Sep 13 00:11:55.508958 systemd[1]: Started session-13.scope. Sep 13 00:11:55.663809 sshd[3554]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:55.665935 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:59670.service. Sep 13 00:11:55.667456 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:59660.service: Deactivated successfully. Sep 13 00:11:55.668429 systemd-logind[1308]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:11:55.668508 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:11:55.669669 systemd-logind[1308]: Removed session 13. 
Sep 13 00:11:55.730262 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 59670 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:55.733876 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:55.739761 systemd-logind[1308]: New session 14 of user core. Sep 13 00:11:55.740682 systemd[1]: Started session-14.scope. Sep 13 00:11:55.964603 sshd[3567]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:55.966727 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:59680.service. Sep 13 00:11:55.968450 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:59670.service: Deactivated successfully. Sep 13 00:11:55.969371 systemd-logind[1308]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:11:55.969430 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:11:55.970377 systemd-logind[1308]: Removed session 14. Sep 13 00:11:56.005409 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 59680 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:56.007136 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:56.012720 systemd-logind[1308]: New session 15 of user core. Sep 13 00:11:56.013703 systemd[1]: Started session-15.scope. Sep 13 00:11:57.253903 sshd[3579]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:57.255977 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:59696.service. Sep 13 00:11:57.257570 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:59680.service: Deactivated successfully. Sep 13 00:11:57.258355 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:11:57.260449 systemd-logind[1308]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:11:57.262578 systemd-logind[1308]: Removed session 15. 
Sep 13 00:11:57.305491 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 59696 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:57.307174 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:57.312366 systemd[1]: Started session-16.scope. Sep 13 00:11:57.312653 systemd-logind[1308]: New session 16 of user core. Sep 13 00:11:57.557007 sshd[3599]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:57.559605 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:59706.service. Sep 13 00:11:57.561838 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:59696.service: Deactivated successfully. Sep 13 00:11:57.561971 systemd-logind[1308]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:11:57.562960 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:11:57.563959 systemd-logind[1308]: Removed session 16. Sep 13 00:11:57.597127 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 59706 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:11:57.598445 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:57.602672 systemd-logind[1308]: New session 17 of user core. Sep 13 00:11:57.603019 systemd[1]: Started session-17.scope. Sep 13 00:11:57.722086 sshd[3612]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:57.724579 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:59706.service: Deactivated successfully. Sep 13 00:11:57.725508 systemd-logind[1308]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:11:57.725543 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:11:57.726358 systemd-logind[1308]: Removed session 17. Sep 13 00:12:02.724785 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:40656.service. 
Sep 13 00:12:02.765693 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:12:02.766090 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:12:02.771953 systemd-logind[1308]: New session 18 of user core. Sep 13 00:12:02.773045 systemd[1]: Started session-18.scope. Sep 13 00:12:02.908497 sshd[3633]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:02.911236 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:40656.service: Deactivated successfully. Sep 13 00:12:02.912266 systemd-logind[1308]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:12:02.912301 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:12:02.913067 systemd-logind[1308]: Removed session 18. Sep 13 00:12:07.911919 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:40664.service. Sep 13 00:12:07.951582 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 40664 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:12:07.953210 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:12:07.956626 systemd-logind[1308]: New session 19 of user core. Sep 13 00:12:07.957455 systemd[1]: Started session-19.scope. Sep 13 00:12:08.075871 sshd[3649]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:08.078735 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:40664.service: Deactivated successfully. Sep 13 00:12:08.082494 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:12:08.083071 systemd-logind[1308]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:12:08.083760 systemd-logind[1308]: Removed session 19. 
Sep 13 00:12:09.582493 kubelet[2079]: E0913 00:12:09.582445 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:13.079209 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:44830.service. Sep 13 00:12:13.117871 sshd[3664]: Accepted publickey for core from 10.0.0.1 port 44830 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:12:13.119150 sshd[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:12:13.123363 systemd-logind[1308]: New session 20 of user core. Sep 13 00:12:13.124179 systemd[1]: Started session-20.scope. Sep 13 00:12:13.255639 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:44834.service. Sep 13 00:12:13.256109 sshd[3664]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:13.259623 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:44830.service: Deactivated successfully. Sep 13 00:12:13.260356 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:12:13.260552 systemd-logind[1308]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:12:13.264914 systemd-logind[1308]: Removed session 20. Sep 13 00:12:13.295816 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 44834 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:12:13.296960 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:12:13.300487 systemd-logind[1308]: New session 21 of user core. Sep 13 00:12:13.301336 systemd[1]: Started session-21.scope. 
Sep 13 00:12:14.584635 kubelet[2079]: E0913 00:12:14.584586 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:14.588208 kubelet[2079]: E0913 00:12:14.588165 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:15.537798 kubelet[2079]: I0913 00:12:15.536762 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-z6vl6" podStartSLOduration=70.536746296 podStartE2EDuration="1m10.536746296s" podCreationTimestamp="2025-09-13 00:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:24.696904362 +0000 UTC m=+26.213419721" watchObservedRunningTime="2025-09-13 00:12:15.536746296 +0000 UTC m=+77.053261655" Sep 13 00:12:15.538855 env[1324]: time="2025-09-13T00:12:15.538050978Z" level=info msg="StopContainer for \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\" with timeout 30 (s)" Sep 13 00:12:15.538855 env[1324]: time="2025-09-13T00:12:15.538384538Z" level=info msg="Stop container \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\" with signal terminated" Sep 13 00:12:15.567661 systemd[1]: run-containerd-runc-k8s.io-9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024-runc.VS8YXF.mount: Deactivated successfully. Sep 13 00:12:15.576795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f-rootfs.mount: Deactivated successfully. 
Sep 13 00:12:15.587828 env[1324]: time="2025-09-13T00:12:15.587770697Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:12:15.590646 env[1324]: time="2025-09-13T00:12:15.590598099Z" level=info msg="shim disconnected" id=0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f Sep 13 00:12:15.590646 env[1324]: time="2025-09-13T00:12:15.590640979Z" level=warning msg="cleaning up after shim disconnected" id=0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f namespace=k8s.io Sep 13 00:12:15.590646 env[1324]: time="2025-09-13T00:12:15.590651739Z" level=info msg="cleaning up dead shim" Sep 13 00:12:15.593123 env[1324]: time="2025-09-13T00:12:15.593082541Z" level=info msg="StopContainer for \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\" with timeout 2 (s)" Sep 13 00:12:15.593433 env[1324]: time="2025-09-13T00:12:15.593404101Z" level=info msg="Stop container \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\" with signal terminated" Sep 13 00:12:15.598677 env[1324]: time="2025-09-13T00:12:15.598413945Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3727 runtime=io.containerd.runc.v2\n" Sep 13 00:12:15.599754 systemd-networkd[1097]: lxc_health: Link DOWN Sep 13 00:12:15.599761 systemd-networkd[1097]: lxc_health: Lost carrier Sep 13 00:12:15.601234 env[1324]: time="2025-09-13T00:12:15.601171307Z" level=info msg="StopContainer for \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\" returns successfully" Sep 13 00:12:15.601915 env[1324]: time="2025-09-13T00:12:15.601890308Z" level=info msg="StopPodSandbox for \"d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a\"" Sep 13 00:12:15.602022 
env[1324]: time="2025-09-13T00:12:15.601970668Z" level=info msg="Container to stop \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:12:15.603843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a-shm.mount: Deactivated successfully. Sep 13 00:12:15.638351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a-rootfs.mount: Deactivated successfully. Sep 13 00:12:15.644020 env[1324]: time="2025-09-13T00:12:15.643966461Z" level=info msg="shim disconnected" id=d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a Sep 13 00:12:15.644020 env[1324]: time="2025-09-13T00:12:15.644014781Z" level=warning msg="cleaning up after shim disconnected" id=d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a namespace=k8s.io Sep 13 00:12:15.644020 env[1324]: time="2025-09-13T00:12:15.644024381Z" level=info msg="cleaning up dead shim" Sep 13 00:12:15.651241 env[1324]: time="2025-09-13T00:12:15.651164347Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3776 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:12:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Sep 13 00:12:15.651534 env[1324]: time="2025-09-13T00:12:15.651506227Z" level=info msg="TearDown network for sandbox \"d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a\" successfully" Sep 13 00:12:15.651579 env[1324]: time="2025-09-13T00:12:15.651533387Z" level=info msg="StopPodSandbox for \"d245ebb35c9ddc223af00c7669fe06b20be4c804c8f2363af01e1b35a9c1980a\" returns successfully" Sep 13 00:12:15.662504 env[1324]: time="2025-09-13T00:12:15.660388234Z" 
level=info msg="shim disconnected" id=9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024 Sep 13 00:12:15.662504 env[1324]: time="2025-09-13T00:12:15.660443154Z" level=warning msg="cleaning up after shim disconnected" id=9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024 namespace=k8s.io Sep 13 00:12:15.662504 env[1324]: time="2025-09-13T00:12:15.660453794Z" level=info msg="cleaning up dead shim" Sep 13 00:12:15.669365 env[1324]: time="2025-09-13T00:12:15.669320001Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n" Sep 13 00:12:15.671236 env[1324]: time="2025-09-13T00:12:15.671182683Z" level=info msg="StopContainer for \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\" returns successfully" Sep 13 00:12:15.671716 env[1324]: time="2025-09-13T00:12:15.671685083Z" level=info msg="StopPodSandbox for \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\"" Sep 13 00:12:15.671755 env[1324]: time="2025-09-13T00:12:15.671741323Z" level=info msg="Container to stop \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:12:15.671782 env[1324]: time="2025-09-13T00:12:15.671757603Z" level=info msg="Container to stop \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:12:15.671782 env[1324]: time="2025-09-13T00:12:15.671770763Z" level=info msg="Container to stop \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:12:15.671834 env[1324]: time="2025-09-13T00:12:15.671782163Z" level=info msg="Container to stop \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Sep 13 00:12:15.671834 env[1324]: time="2025-09-13T00:12:15.671800723Z" level=info msg="Container to stop \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:12:15.697086 env[1324]: time="2025-09-13T00:12:15.697021623Z" level=info msg="shim disconnected" id=b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67 Sep 13 00:12:15.697086 env[1324]: time="2025-09-13T00:12:15.697083983Z" level=warning msg="cleaning up after shim disconnected" id=b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67 namespace=k8s.io Sep 13 00:12:15.697086 env[1324]: time="2025-09-13T00:12:15.697093303Z" level=info msg="cleaning up dead shim" Sep 13 00:12:15.705545 env[1324]: time="2025-09-13T00:12:15.705492670Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3827 runtime=io.containerd.runc.v2\n" Sep 13 00:12:15.705833 env[1324]: time="2025-09-13T00:12:15.705805870Z" level=info msg="TearDown network for sandbox \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" successfully" Sep 13 00:12:15.705868 env[1324]: time="2025-09-13T00:12:15.705833230Z" level=info msg="StopPodSandbox for \"b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67\" returns successfully" Sep 13 00:12:15.770254 kubelet[2079]: I0913 00:12:15.770206 2079 scope.go:117] "RemoveContainer" containerID="9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024" Sep 13 00:12:15.771576 env[1324]: time="2025-09-13T00:12:15.771540922Z" level=info msg="RemoveContainer for \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\"" Sep 13 00:12:15.798244 env[1324]: time="2025-09-13T00:12:15.775679605Z" level=info msg="RemoveContainer for \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\" returns successfully" Sep 13 00:12:15.798366 
kubelet[2079]: I0913 00:12:15.795875 2079 scope.go:117] "RemoveContainer" containerID="833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764" Sep 13 00:12:15.798366 kubelet[2079]: I0913 00:12:15.796028 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-kernel\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798366 kubelet[2079]: I0913 00:12:15.796048 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-run\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798366 kubelet[2079]: I0913 00:12:15.796072 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-hubble-tls\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798366 kubelet[2079]: I0913 00:12:15.796087 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-net\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798366 kubelet[2079]: I0913 00:12:15.796104 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrbsc\" (UniqueName: \"kubernetes.io/projected/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-kube-api-access-lrbsc\") pod \"0b3d9826-d8fc-43ec-9b03-0014d8e17d29\" (UID: \"0b3d9826-d8fc-43ec-9b03-0014d8e17d29\") " Sep 13 00:12:15.798366 kubelet[2079]: I0913 00:12:15.796121 2079 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cni-path\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798551 kubelet[2079]: I0913 00:12:15.796137 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-bpf-maps\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798551 kubelet[2079]: I0913 00:12:15.796175 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c84b9222-a5d9-455a-a33e-66e54977b741-clustermesh-secrets\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798551 kubelet[2079]: I0913 00:12:15.796207 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vncm2\" (UniqueName: \"kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-kube-api-access-vncm2\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798551 kubelet[2079]: I0913 00:12:15.796224 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-cgroup\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798551 kubelet[2079]: I0913 00:12:15.796238 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-xtables-lock\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: 
\"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798551 kubelet[2079]: I0913 00:12:15.796257 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-cilium-config-path\") pod \"0b3d9826-d8fc-43ec-9b03-0014d8e17d29\" (UID: \"0b3d9826-d8fc-43ec-9b03-0014d8e17d29\") " Sep 13 00:12:15.798685 kubelet[2079]: I0913 00:12:15.796273 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-config-path\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798685 kubelet[2079]: I0913 00:12:15.796287 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-hostproc\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798685 kubelet[2079]: I0913 00:12:15.796303 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-etc-cni-netd\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.798685 kubelet[2079]: I0913 00:12:15.796316 2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-lib-modules\") pod \"c84b9222-a5d9-455a-a33e-66e54977b741\" (UID: \"c84b9222-a5d9-455a-a33e-66e54977b741\") " Sep 13 00:12:15.803450 env[1324]: time="2025-09-13T00:12:15.803412507Z" level=info msg="RemoveContainer for \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\"" Sep 13 00:12:15.804540 
kubelet[2079]: I0913 00:12:15.804505 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.804635 kubelet[2079]: I0913 00:12:15.804566 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.804635 kubelet[2079]: I0913 00:12:15.804584 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.804820 kubelet[2079]: I0913 00:12:15.804793 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.806283 env[1324]: time="2025-09-13T00:12:15.806249389Z" level=info msg="RemoveContainer for \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\" returns successfully" Sep 13 00:12:15.806424 kubelet[2079]: I0913 00:12:15.806384 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b3d9826-d8fc-43ec-9b03-0014d8e17d29" (UID: "0b3d9826-d8fc-43ec-9b03-0014d8e17d29"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:12:15.808387 kubelet[2079]: I0913 00:12:15.806965 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.808387 kubelet[2079]: I0913 00:12:15.807006 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.808387 kubelet[2079]: I0913 00:12:15.807084 2079 scope.go:117] "RemoveContainer" containerID="ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9" Sep 13 00:12:15.808387 kubelet[2079]: I0913 00:12:15.807261 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cni-path" (OuterVolumeSpecName: "cni-path") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.808387 kubelet[2079]: I0913 00:12:15.807295 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.808561 kubelet[2079]: I0913 00:12:15.807364 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-hostproc" (OuterVolumeSpecName: "hostproc") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.808561 kubelet[2079]: I0913 00:12:15.807386 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:12:15.808620 env[1324]: time="2025-09-13T00:12:15.808439991Z" level=info msg="RemoveContainer for \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\"" Sep 13 00:12:15.808750 kubelet[2079]: I0913 00:12:15.808721 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:12:15.809372 kubelet[2079]: I0913 00:12:15.809347 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:12:15.809513 kubelet[2079]: I0913 00:12:15.809386 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-kube-api-access-vncm2" (OuterVolumeSpecName: "kube-api-access-vncm2") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "kube-api-access-vncm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:12:15.811442 env[1324]: time="2025-09-13T00:12:15.811411794Z" level=info msg="RemoveContainer for \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\" returns successfully" Sep 13 00:12:15.811699 kubelet[2079]: I0913 00:12:15.811673 2079 scope.go:117] "RemoveContainer" containerID="1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574" Sep 13 00:12:15.812036 kubelet[2079]: I0913 00:12:15.812008 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c84b9222-a5d9-455a-a33e-66e54977b741-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c84b9222-a5d9-455a-a33e-66e54977b741" (UID: "c84b9222-a5d9-455a-a33e-66e54977b741"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:12:15.812584 env[1324]: time="2025-09-13T00:12:15.812558994Z" level=info msg="RemoveContainer for \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\"" Sep 13 00:12:15.812707 kubelet[2079]: I0913 00:12:15.812572 2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-kube-api-access-lrbsc" (OuterVolumeSpecName: "kube-api-access-lrbsc") pod "0b3d9826-d8fc-43ec-9b03-0014d8e17d29" (UID: "0b3d9826-d8fc-43ec-9b03-0014d8e17d29"). InnerVolumeSpecName "kube-api-access-lrbsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:12:15.815085 env[1324]: time="2025-09-13T00:12:15.815046876Z" level=info msg="RemoveContainer for \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\" returns successfully" Sep 13 00:12:15.815396 kubelet[2079]: I0913 00:12:15.815370 2079 scope.go:117] "RemoveContainer" containerID="f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f" Sep 13 00:12:15.816544 env[1324]: time="2025-09-13T00:12:15.816510078Z" level=info msg="RemoveContainer for \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\"" Sep 13 00:12:15.818908 env[1324]: time="2025-09-13T00:12:15.818878999Z" level=info msg="RemoveContainer for \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\" returns successfully" Sep 13 00:12:15.819106 kubelet[2079]: I0913 00:12:15.819085 2079 scope.go:117] "RemoveContainer" containerID="9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024" Sep 13 00:12:15.819527 env[1324]: time="2025-09-13T00:12:15.819454080Z" level=error msg="ContainerStatus for \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\": not found" Sep 13 00:12:15.819795 kubelet[2079]: E0913 00:12:15.819755 2079 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\": not found" containerID="9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024" Sep 13 00:12:15.819995 kubelet[2079]: I0913 00:12:15.819885 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024"} err="failed to get container status 
\"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024\": not found" Sep 13 00:12:15.820080 kubelet[2079]: I0913 00:12:15.820067 2079 scope.go:117] "RemoveContainer" containerID="833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764" Sep 13 00:12:15.820372 env[1324]: time="2025-09-13T00:12:15.820319081Z" level=error msg="ContainerStatus for \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\": not found" Sep 13 00:12:15.820552 kubelet[2079]: E0913 00:12:15.820530 2079 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\": not found" containerID="833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764" Sep 13 00:12:15.820596 kubelet[2079]: I0913 00:12:15.820558 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764"} err="failed to get container status \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\": rpc error: code = NotFound desc = an error occurred when try to find container \"833e959b6a04571b32e7dd2a6443d7d45ca32a0bfd83dc791bb8f2295b857764\": not found" Sep 13 00:12:15.820596 kubelet[2079]: I0913 00:12:15.820577 2079 scope.go:117] "RemoveContainer" containerID="ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9" Sep 13 00:12:15.820817 env[1324]: time="2025-09-13T00:12:15.820767921Z" level=error msg="ContainerStatus for \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\": not found" Sep 13 00:12:15.821047 kubelet[2079]: E0913 00:12:15.821025 2079 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\": not found" containerID="ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9" Sep 13 00:12:15.821208 kubelet[2079]: I0913 00:12:15.821177 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9"} err="failed to get container status \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef9af16e90c6b3ad0001d985b56d910d7d545ffcef9cd96b773bc60f3ee90ae9\": not found" Sep 13 00:12:15.821326 kubelet[2079]: I0913 00:12:15.821314 2079 scope.go:117] "RemoveContainer" containerID="1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574" Sep 13 00:12:15.821612 env[1324]: time="2025-09-13T00:12:15.821564482Z" level=error msg="ContainerStatus for \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\": not found" Sep 13 00:12:15.821844 kubelet[2079]: E0913 00:12:15.821822 2079 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\": not found" containerID="1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574" Sep 13 00:12:15.821999 kubelet[2079]: I0913 00:12:15.821973 2079 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574"} err="failed to get container status \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bd2228ddf88da0f1ae22acbfe4a41f3778eb7c41a255abb447b32ced1535574\": not found" Sep 13 00:12:15.822095 kubelet[2079]: I0913 00:12:15.822081 2079 scope.go:117] "RemoveContainer" containerID="f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f" Sep 13 00:12:15.822383 env[1324]: time="2025-09-13T00:12:15.822337682Z" level=error msg="ContainerStatus for \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\": not found" Sep 13 00:12:15.822591 kubelet[2079]: E0913 00:12:15.822567 2079 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\": not found" containerID="f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f" Sep 13 00:12:15.822642 kubelet[2079]: I0913 00:12:15.822594 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f"} err="failed to get container status \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9883076ea290766a0d241ba7ed39d008a3a10b17f219717aca8e4bd0f8ea50f\": not found" Sep 13 00:12:15.822642 kubelet[2079]: I0913 00:12:15.822609 2079 scope.go:117] "RemoveContainer" 
containerID="0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f" Sep 13 00:12:15.823850 env[1324]: time="2025-09-13T00:12:15.823819683Z" level=info msg="RemoveContainer for \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\"" Sep 13 00:12:15.826514 env[1324]: time="2025-09-13T00:12:15.826477605Z" level=info msg="RemoveContainer for \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\" returns successfully" Sep 13 00:12:15.826776 kubelet[2079]: I0913 00:12:15.826757 2079 scope.go:117] "RemoveContainer" containerID="0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f" Sep 13 00:12:15.827138 env[1324]: time="2025-09-13T00:12:15.827085566Z" level=error msg="ContainerStatus for \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\": not found" Sep 13 00:12:15.827290 kubelet[2079]: E0913 00:12:15.827268 2079 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\": not found" containerID="0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f" Sep 13 00:12:15.827380 kubelet[2079]: I0913 00:12:15.827360 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f"} err="failed to get container status \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a73c8febd9d92614bb3a990768dccf80e41cccda775ed2ac58d594eaf08ff1f\": not found" Sep 13 00:12:15.896765 kubelet[2079]: I0913 00:12:15.896718 2079 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896765 kubelet[2079]: I0913 00:12:15.896754 2079 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896765 kubelet[2079]: I0913 00:12:15.896765 2079 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c84b9222-a5d9-455a-a33e-66e54977b741-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896765 kubelet[2079]: I0913 00:12:15.896776 2079 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vncm2\" (UniqueName: \"kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-kube-api-access-vncm2\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896786 2079 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896794 2079 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896801 2079 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896809 2079 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896816 2079 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896823 2079 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896831 2079 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.896983 kubelet[2079]: I0913 00:12:15.896840 2079 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.897160 kubelet[2079]: I0913 00:12:15.896848 2079 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.897160 kubelet[2079]: I0913 00:12:15.896857 2079 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c84b9222-a5d9-455a-a33e-66e54977b741-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.897160 kubelet[2079]: I0913 00:12:15.896865 2079 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c84b9222-a5d9-455a-a33e-66e54977b741-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:15.897160 kubelet[2079]: I0913 00:12:15.896872 2079 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-lrbsc\" (UniqueName: \"kubernetes.io/projected/0b3d9826-d8fc-43ec-9b03-0014d8e17d29-kube-api-access-lrbsc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:12:16.562156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b91d47495b642ae24006eb101cc3011f9c1f2dcbb42e8e314dce55c6a37f024-rootfs.mount: Deactivated successfully. Sep 13 00:12:16.562332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67-rootfs.mount: Deactivated successfully. Sep 13 00:12:16.562427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b697d07de52241c1b1801e9ae224ec238765fbf929e3280b78d10050df6adb67-shm.mount: Deactivated successfully. Sep 13 00:12:16.562506 systemd[1]: var-lib-kubelet-pods-0b3d9826\x2dd8fc\x2d43ec\x2d9b03\x2d0014d8e17d29-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlrbsc.mount: Deactivated successfully. Sep 13 00:12:16.562584 systemd[1]: var-lib-kubelet-pods-c84b9222\x2da5d9\x2d455a\x2da33e\x2d66e54977b741-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvncm2.mount: Deactivated successfully. Sep 13 00:12:16.562710 systemd[1]: var-lib-kubelet-pods-c84b9222\x2da5d9\x2d455a\x2da33e\x2d66e54977b741-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:12:16.562795 systemd[1]: var-lib-kubelet-pods-c84b9222\x2da5d9\x2d455a\x2da33e\x2d66e54977b741-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:12:16.586146 kubelet[2079]: I0913 00:12:16.586100 2079 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3d9826-d8fc-43ec-9b03-0014d8e17d29" path="/var/lib/kubelet/pods/0b3d9826-d8fc-43ec-9b03-0014d8e17d29/volumes" Sep 13 00:12:16.588027 kubelet[2079]: I0913 00:12:16.588001 2079 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c84b9222-a5d9-455a-a33e-66e54977b741" path="/var/lib/kubelet/pods/c84b9222-a5d9-455a-a33e-66e54977b741/volumes" Sep 13 00:12:17.522630 sshd[3676]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:17.524841 systemd[1]: Started sshd@21-10.0.0.44:22-10.0.0.1:44846.service. Sep 13 00:12:17.525689 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:44834.service: Deactivated successfully. Sep 13 00:12:17.526536 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:12:17.526659 systemd-logind[1308]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:12:17.527616 systemd-logind[1308]: Removed session 21. Sep 13 00:12:17.563435 sshd[3843]: Accepted publickey for core from 10.0.0.1 port 44846 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY Sep 13 00:12:17.564709 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:12:17.569073 systemd-logind[1308]: New session 22 of user core. Sep 13 00:12:17.569906 systemd[1]: Started session-22.scope. 
Sep 13 00:12:18.437852 kubelet[2079]: E0913 00:12:18.437812    2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c84b9222-a5d9-455a-a33e-66e54977b741" containerName="mount-bpf-fs"
Sep 13 00:12:18.437852 kubelet[2079]: E0913 00:12:18.437842    2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b3d9826-d8fc-43ec-9b03-0014d8e17d29" containerName="cilium-operator"
Sep 13 00:12:18.437852 kubelet[2079]: E0913 00:12:18.437849    2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c84b9222-a5d9-455a-a33e-66e54977b741" containerName="clean-cilium-state"
Sep 13 00:12:18.437852 kubelet[2079]: E0913 00:12:18.437856    2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c84b9222-a5d9-455a-a33e-66e54977b741" containerName="mount-cgroup"
Sep 13 00:12:18.437852 kubelet[2079]: E0913 00:12:18.437861    2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c84b9222-a5d9-455a-a33e-66e54977b741" containerName="apply-sysctl-overwrites"
Sep 13 00:12:18.438549 kubelet[2079]: E0913 00:12:18.437868    2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c84b9222-a5d9-455a-a33e-66e54977b741" containerName="cilium-agent"
Sep 13 00:12:18.438549 kubelet[2079]: I0913 00:12:18.437890    2079 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84b9222-a5d9-455a-a33e-66e54977b741" containerName="cilium-agent"
Sep 13 00:12:18.438549 kubelet[2079]: I0913 00:12:18.437897    2079 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3d9826-d8fc-43ec-9b03-0014d8e17d29" containerName="cilium-operator"
Sep 13 00:12:18.442714 sshd[3843]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:18.442768 systemd[1]: Started sshd@22-10.0.0.44:22-10.0.0.1:44860.service.
Sep 13 00:12:18.450646 systemd[1]: sshd@21-10.0.0.44:22-10.0.0.1:44846.service: Deactivated successfully.
Sep 13 00:12:18.454479 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:12:18.457036 systemd-logind[1308]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:12:18.457864 systemd-logind[1308]: Removed session 22.
Sep 13 00:12:18.488842 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 44860 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:12:18.490075 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:12:18.493733 systemd-logind[1308]: New session 23 of user core.
Sep 13 00:12:18.494283 systemd[1]: Started session-23.scope.
Sep 13 00:12:18.511183 kubelet[2079]: I0913 00:12:18.511141    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-clustermesh-secrets\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511183 kubelet[2079]: I0913 00:12:18.511180    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-kernel\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511336 kubelet[2079]: I0913 00:12:18.511211    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-etc-cni-netd\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511336 kubelet[2079]: I0913 00:12:18.511228    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-lib-modules\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511336 kubelet[2079]: I0913 00:12:18.511244    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-net\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511336 kubelet[2079]: I0913 00:12:18.511258    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cni-path\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511336 kubelet[2079]: I0913 00:12:18.511275    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-ipsec-secrets\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511336 kubelet[2079]: I0913 00:12:18.511291    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55pvw\" (UniqueName: \"kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-kube-api-access-55pvw\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511469 kubelet[2079]: I0913 00:12:18.511308    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-bpf-maps\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511469 kubelet[2079]: I0913 00:12:18.511324    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-run\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511469 kubelet[2079]: I0913 00:12:18.511340    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-hostproc\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511469 kubelet[2079]: I0913 00:12:18.511356    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-cgroup\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511469 kubelet[2079]: I0913 00:12:18.511370    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-xtables-lock\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511469 kubelet[2079]: I0913 00:12:18.511386    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-config-path\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.511595 kubelet[2079]: I0913 00:12:18.511403    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-hubble-tls\") pod \"cilium-n4hdw\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") " pod="kube-system/cilium-n4hdw"
Sep 13 00:12:18.629818 sshd[3856]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:18.636042 kubelet[2079]: E0913 00:12:18.636010    2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:12:18.636865 systemd[1]: sshd@22-10.0.0.44:22-10.0.0.1:44860.service: Deactivated successfully.
Sep 13 00:12:18.639103 env[1324]: time="2025-09-13T00:12:18.638067907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n4hdw,Uid:8028450d-2c1d-445d-b446-11e8ed0d40db,Namespace:kube-system,Attempt:0,}"
Sep 13 00:12:18.642081 systemd[1]: Started sshd@23-10.0.0.44:22-10.0.0.1:44874.service.
Sep 13 00:12:18.642560 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:12:18.645859 kubelet[2079]: E0913 00:12:18.644483    2079 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:12:18.645877 systemd-logind[1308]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:12:18.650414 systemd-logind[1308]: Removed session 23.
Sep 13 00:12:18.657546 env[1324]: time="2025-09-13T00:12:18.657471521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:12:18.657660 env[1324]: time="2025-09-13T00:12:18.657523681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:12:18.657660 env[1324]: time="2025-09-13T00:12:18.657533961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:12:18.657811 env[1324]: time="2025-09-13T00:12:18.657780681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2 pid=3884 runtime=io.containerd.runc.v2
Sep 13 00:12:18.683651 sshd[3876]: Accepted publickey for core from 10.0.0.1 port 44874 ssh2: RSA SHA256:IYYmYtZT7fhBES8dcJq//ghMZv88JUKT/A8TkXgi+lY
Sep 13 00:12:18.684958 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:12:18.694900 systemd-logind[1308]: New session 24 of user core.
Sep 13 00:12:18.695837 systemd[1]: Started session-24.scope.
Sep 13 00:12:18.707657 env[1324]: time="2025-09-13T00:12:18.707607718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n4hdw,Uid:8028450d-2c1d-445d-b446-11e8ed0d40db,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2\""
Sep 13 00:12:18.708522 kubelet[2079]: E0913 00:12:18.708500    2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:12:18.711515 env[1324]: time="2025-09-13T00:12:18.711475401Z" level=info msg="CreateContainer within sandbox \"3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:12:18.722493 env[1324]: time="2025-09-13T00:12:18.722450849Z" level=info msg="CreateContainer within sandbox \"3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20\""
Sep 13 00:12:18.723873 env[1324]: time="2025-09-13T00:12:18.723314690Z" level=info msg="StartContainer for \"f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20\""
Sep 13 00:12:18.797570 env[1324]: time="2025-09-13T00:12:18.797512585Z" level=info msg="StartContainer for \"f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20\" returns successfully"
Sep 13 00:12:18.829522 env[1324]: time="2025-09-13T00:12:18.829475609Z" level=info msg="shim disconnected" id=f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20
Sep 13 00:12:18.829522 env[1324]: time="2025-09-13T00:12:18.829524529Z" level=warning msg="cleaning up after shim disconnected" id=f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20 namespace=k8s.io
Sep 13 00:12:18.829753 env[1324]: time="2025-09-13T00:12:18.829534809Z" level=info msg="cleaning up dead shim"
Sep 13 00:12:18.836233 env[1324]: time="2025-09-13T00:12:18.836164854Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3977 runtime=io.containerd.runc.v2\n"
Sep 13 00:12:19.795848 env[1324]: time="2025-09-13T00:12:19.795803873Z" level=info msg="StopPodSandbox for \"3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2\""
Sep 13 00:12:19.796258 env[1324]: time="2025-09-13T00:12:19.795865233Z" level=info msg="Container to stop \"f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:12:19.797858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2-shm.mount: Deactivated successfully.
Sep 13 00:12:19.823997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2-rootfs.mount: Deactivated successfully.
Sep 13 00:12:19.831733 env[1324]: time="2025-09-13T00:12:19.831682339Z" level=info msg="shim disconnected" id=3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2
Sep 13 00:12:19.831733 env[1324]: time="2025-09-13T00:12:19.831731339Z" level=warning msg="cleaning up after shim disconnected" id=3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2 namespace=k8s.io
Sep 13 00:12:19.831733 env[1324]: time="2025-09-13T00:12:19.831740979Z" level=info msg="cleaning up dead shim"
Sep 13 00:12:19.844996 env[1324]: time="2025-09-13T00:12:19.844950989Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4010 runtime=io.containerd.runc.v2\n"
Sep 13 00:12:19.845311 env[1324]: time="2025-09-13T00:12:19.845288309Z" level=info msg="TearDown network for sandbox \"3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2\" successfully"
Sep 13 00:12:19.845349 env[1324]: time="2025-09-13T00:12:19.845312069Z" level=info msg="StopPodSandbox for \"3cdb3e767eb8289cb762160d89f8006840413bad37113e9b018bfbd43bf660d2\" returns successfully"
Sep 13 00:12:19.919915 kubelet[2079]: I0913 00:12:19.919863    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-net\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920356 kubelet[2079]: I0913 00:12:19.919929    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-ipsec-secrets\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920356 kubelet[2079]: I0913 00:12:19.919955    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cni-path\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920356 kubelet[2079]: I0913 00:12:19.919979    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-clustermesh-secrets\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920356 kubelet[2079]: I0913 00:12:19.919996    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-lib-modules\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920356 kubelet[2079]: I0913 00:12:19.920010    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-xtables-lock\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920356 kubelet[2079]: I0913 00:12:19.920024    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-kernel\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920512 kubelet[2079]: I0913 00:12:19.920038    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-run\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920512 kubelet[2079]: I0913 00:12:19.920057    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-hubble-tls\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920512 kubelet[2079]: I0913 00:12:19.920079    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55pvw\" (UniqueName: \"kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-kube-api-access-55pvw\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920512 kubelet[2079]: I0913 00:12:19.920094    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-bpf-maps\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920512 kubelet[2079]: I0913 00:12:19.920109    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-hostproc\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920512 kubelet[2079]: I0913 00:12:19.920125    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-etc-cni-netd\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920651 kubelet[2079]: I0913 00:12:19.920140    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-cgroup\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.920651 kubelet[2079]: I0913 00:12:19.920158    2079 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-config-path\") pod \"8028450d-2c1d-445d-b446-11e8ed0d40db\" (UID: \"8028450d-2c1d-445d-b446-11e8ed0d40db\") "
Sep 13 00:12:19.921285 kubelet[2079]: I0913 00:12:19.920756    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922578 kubelet[2079]: I0913 00:12:19.920792    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922578 kubelet[2079]: I0913 00:12:19.920806    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922578 kubelet[2079]: I0913 00:12:19.920815    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922578 kubelet[2079]: I0913 00:12:19.920823    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-hostproc" (OuterVolumeSpecName: "hostproc") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922578 kubelet[2079]: I0913 00:12:19.920836    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922781 kubelet[2079]: I0913 00:12:19.920895    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922781 kubelet[2079]: I0913 00:12:19.921494    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922781 kubelet[2079]: I0913 00:12:19.921516    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922781 kubelet[2079]: I0913 00:12:19.921719    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cni-path" (OuterVolumeSpecName: "cni-path") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:12:19.922781 kubelet[2079]: I0913 00:12:19.922169    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:12:19.924919 systemd[1]: var-lib-kubelet-pods-8028450d\x2d2c1d\x2d445d\x2db446\x2d11e8ed0d40db-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:12:19.927342 systemd[1]: var-lib-kubelet-pods-8028450d\x2d2c1d\x2d445d\x2db446\x2d11e8ed0d40db-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:12:19.928291 kubelet[2079]: I0913 00:12:19.928243    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:12:19.928584 kubelet[2079]: I0913 00:12:19.928552    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:12:19.929316 kubelet[2079]: I0913 00:12:19.929280    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:12:19.929497 kubelet[2079]: I0913 00:12:19.929464    2079 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-kube-api-access-55pvw" (OuterVolumeSpecName: "kube-api-access-55pvw") pod "8028450d-2c1d-445d-b446-11e8ed0d40db" (UID: "8028450d-2c1d-445d-b446-11e8ed0d40db"). InnerVolumeSpecName "kube-api-access-55pvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:12:20.020625 kubelet[2079]: I0913 00:12:20.020584    2079 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.020806 kubelet[2079]: I0913 00:12:20.020794    2079 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.020871 kubelet[2079]: I0913 00:12:20.020861    2079 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.020943 kubelet[2079]: I0913 00:12:20.020915    2079 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55pvw\" (UniqueName: \"kubernetes.io/projected/8028450d-2c1d-445d-b446-11e8ed0d40db-kube-api-access-55pvw\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021030 kubelet[2079]: I0913 00:12:20.021020    2079 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021099 kubelet[2079]: I0913 00:12:20.021089    2079 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021163 kubelet[2079]: I0913 00:12:20.021154    2079 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021265 kubelet[2079]: I0913 00:12:20.021254    2079 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021345 kubelet[2079]: I0913 00:12:20.021335    2079 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021415 kubelet[2079]: I0913 00:12:20.021404    2079 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021481 kubelet[2079]: I0913 00:12:20.021472    2079 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021542 kubelet[2079]: I0913 00:12:20.021526    2079 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021604 kubelet[2079]: I0913 00:12:20.021595    2079 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021665 kubelet[2079]: I0913 00:12:20.021649    2079 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8028450d-2c1d-445d-b446-11e8ed0d40db-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.021729 kubelet[2079]: I0913 00:12:20.021719    2079 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8028450d-2c1d-445d-b446-11e8ed0d40db-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 13 00:12:20.077503 kubelet[2079]: I0913 00:12:20.076045    2079 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:12:20Z","lastTransitionTime":"2025-09-13T00:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:12:20.617069 systemd[1]: var-lib-kubelet-pods-8028450d\x2d2c1d\x2d445d\x2db446\x2d11e8ed0d40db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55pvw.mount: Deactivated successfully.
Sep 13 00:12:20.617264 systemd[1]: var-lib-kubelet-pods-8028450d\x2d2c1d\x2d445d\x2db446\x2d11e8ed0d40db-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:12:20.797530 kubelet[2079]: I0913 00:12:20.797491    2079 scope.go:117] "RemoveContainer" containerID="f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20"
Sep 13 00:12:20.801104 env[1324]: time="2025-09-13T00:12:20.800699352Z" level=info msg="RemoveContainer for \"f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20\""
Sep 13 00:12:20.804037 env[1324]: time="2025-09-13T00:12:20.803936594Z" level=info msg="RemoveContainer for \"f8f995c7cb11f7f6401a993e35c7842f393c0e1e69305e8a92a879afc455ae20\" returns successfully"
Sep 13 00:12:20.838663 kubelet[2079]: E0913 00:12:20.838615    2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8028450d-2c1d-445d-b446-11e8ed0d40db" containerName="mount-cgroup"
Sep 13 00:12:20.838823 kubelet[2079]: I0913 00:12:20.838677    2079 memory_manager.go:354] "RemoveStaleState removing state" podUID="8028450d-2c1d-445d-b446-11e8ed0d40db" containerName="mount-cgroup"
Sep 13 00:12:20.927088 kubelet[2079]: I0913 00:12:20.926981    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-lib-modules\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927088 kubelet[2079]: I0913 00:12:20.927023    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-cni-path\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927088 kubelet[2079]: I0913 00:12:20.927043    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-xtables-lock\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927088 kubelet[2079]: I0913 00:12:20.927060    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78deb0af-c837-4942-96e9-dbf9675e56e3-clustermesh-secrets\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927088 kubelet[2079]: I0913 00:12:20.927080    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ls2v\" (UniqueName: \"kubernetes.io/projected/78deb0af-c837-4942-96e9-dbf9675e56e3-kube-api-access-4ls2v\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927557 kubelet[2079]: I0913 00:12:20.927102    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-bpf-maps\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927557 kubelet[2079]: I0913 00:12:20.927120    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/78deb0af-c837-4942-96e9-dbf9675e56e3-cilium-ipsec-secrets\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927557 kubelet[2079]: I0913 00:12:20.927142    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78deb0af-c837-4942-96e9-dbf9675e56e3-hubble-tls\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927557 kubelet[2079]: I0913 00:12:20.927158    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78deb0af-c837-4942-96e9-dbf9675e56e3-cilium-config-path\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927557 kubelet[2079]: I0913 00:12:20.927177    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-cilium-cgroup\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5"
Sep 13 00:12:20.927557 kubelet[2079]: I0913 00:12:20.927204    2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-etc-cni-netd\") pod \"cilium-c27h5\" (UID:
\"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5" Sep 13 00:12:20.927698 kubelet[2079]: I0913 00:12:20.927220 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-host-proc-sys-net\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5" Sep 13 00:12:20.927698 kubelet[2079]: I0913 00:12:20.927245 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-cilium-run\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5" Sep 13 00:12:20.927698 kubelet[2079]: I0913 00:12:20.927264 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-hostproc\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5" Sep 13 00:12:20.927698 kubelet[2079]: I0913 00:12:20.927284 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78deb0af-c837-4942-96e9-dbf9675e56e3-host-proc-sys-kernel\") pod \"cilium-c27h5\" (UID: \"78deb0af-c837-4942-96e9-dbf9675e56e3\") " pod="kube-system/cilium-c27h5" Sep 13 00:12:21.143180 kubelet[2079]: E0913 00:12:21.143147 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:21.143877 env[1324]: time="2025-09-13T00:12:21.143829274Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-c27h5,Uid:78deb0af-c837-4942-96e9-dbf9675e56e3,Namespace:kube-system,Attempt:0,}" Sep 13 00:12:21.157578 env[1324]: time="2025-09-13T00:12:21.157504283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:21.157735 env[1324]: time="2025-09-13T00:12:21.157545363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:21.157815 env[1324]: time="2025-09-13T00:12:21.157789083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:21.158153 env[1324]: time="2025-09-13T00:12:21.158115404Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163 pid=4036 runtime=io.containerd.runc.v2 Sep 13 00:12:21.197066 env[1324]: time="2025-09-13T00:12:21.196877791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c27h5,Uid:78deb0af-c837-4942-96e9-dbf9675e56e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\"" Sep 13 00:12:21.197601 kubelet[2079]: E0913 00:12:21.197575 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:21.199736 env[1324]: time="2025-09-13T00:12:21.199702193Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:12:21.214321 env[1324]: time="2025-09-13T00:12:21.214243643Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1e52d93e1ac41ebc6775e820fdfb1e075617cca8b31118c7ab8dbcf85a3ef6e\"" Sep 13 00:12:21.214884 env[1324]: time="2025-09-13T00:12:21.214848883Z" level=info msg="StartContainer for \"c1e52d93e1ac41ebc6775e820fdfb1e075617cca8b31118c7ab8dbcf85a3ef6e\"" Sep 13 00:12:21.270929 env[1324]: time="2025-09-13T00:12:21.270873322Z" level=info msg="StartContainer for \"c1e52d93e1ac41ebc6775e820fdfb1e075617cca8b31118c7ab8dbcf85a3ef6e\" returns successfully" Sep 13 00:12:21.302444 env[1324]: time="2025-09-13T00:12:21.302400184Z" level=info msg="shim disconnected" id=c1e52d93e1ac41ebc6775e820fdfb1e075617cca8b31118c7ab8dbcf85a3ef6e Sep 13 00:12:21.302706 env[1324]: time="2025-09-13T00:12:21.302689224Z" level=warning msg="cleaning up after shim disconnected" id=c1e52d93e1ac41ebc6775e820fdfb1e075617cca8b31118c7ab8dbcf85a3ef6e namespace=k8s.io Sep 13 00:12:21.302774 env[1324]: time="2025-09-13T00:12:21.302761024Z" level=info msg="cleaning up dead shim" Sep 13 00:12:21.309781 env[1324]: time="2025-09-13T00:12:21.309744029Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4120 runtime=io.containerd.runc.v2\n" Sep 13 00:12:21.802010 kubelet[2079]: E0913 00:12:21.801922 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:21.803477 env[1324]: time="2025-09-13T00:12:21.803437574Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:12:21.819667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212309441.mount: Deactivated successfully. 
Sep 13 00:12:21.832488 env[1324]: time="2025-09-13T00:12:21.832437314Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6338fa7c5e1658f466584856b607de57cffdcae547da6efb8f839b5e26453230\"" Sep 13 00:12:21.833284 env[1324]: time="2025-09-13T00:12:21.833251794Z" level=info msg="StartContainer for \"6338fa7c5e1658f466584856b607de57cffdcae547da6efb8f839b5e26453230\"" Sep 13 00:12:21.901676 env[1324]: time="2025-09-13T00:12:21.901571562Z" level=info msg="StartContainer for \"6338fa7c5e1658f466584856b607de57cffdcae547da6efb8f839b5e26453230\" returns successfully" Sep 13 00:12:21.955038 env[1324]: time="2025-09-13T00:12:21.954953319Z" level=info msg="shim disconnected" id=6338fa7c5e1658f466584856b607de57cffdcae547da6efb8f839b5e26453230 Sep 13 00:12:21.955038 env[1324]: time="2025-09-13T00:12:21.955006079Z" level=warning msg="cleaning up after shim disconnected" id=6338fa7c5e1658f466584856b607de57cffdcae547da6efb8f839b5e26453230 namespace=k8s.io Sep 13 00:12:21.955038 env[1324]: time="2025-09-13T00:12:21.955015639Z" level=info msg="cleaning up dead shim" Sep 13 00:12:21.962476 env[1324]: time="2025-09-13T00:12:21.962423565Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" Sep 13 00:12:22.586743 kubelet[2079]: I0913 00:12:22.586665 2079 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8028450d-2c1d-445d-b446-11e8ed0d40db" path="/var/lib/kubelet/pods/8028450d-2c1d-445d-b446-11e8ed0d40db/volumes" Sep 13 00:12:22.617251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6338fa7c5e1658f466584856b607de57cffdcae547da6efb8f839b5e26453230-rootfs.mount: Deactivated successfully. 
Sep 13 00:12:22.808929 kubelet[2079]: E0913 00:12:22.807659 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:22.814101 env[1324]: time="2025-09-13T00:12:22.814063387Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:12:22.851561 env[1324]: time="2025-09-13T00:12:22.850391892Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3a3705eca311cc5a702d3ed98bae892b91968b1eea97c8a395cfe918505cfdc\"" Sep 13 00:12:22.852527 env[1324]: time="2025-09-13T00:12:22.852495733Z" level=info msg="StartContainer for \"b3a3705eca311cc5a702d3ed98bae892b91968b1eea97c8a395cfe918505cfdc\"" Sep 13 00:12:22.913441 env[1324]: time="2025-09-13T00:12:22.913373255Z" level=info msg="StartContainer for \"b3a3705eca311cc5a702d3ed98bae892b91968b1eea97c8a395cfe918505cfdc\" returns successfully" Sep 13 00:12:22.941130 env[1324]: time="2025-09-13T00:12:22.941073314Z" level=info msg="shim disconnected" id=b3a3705eca311cc5a702d3ed98bae892b91968b1eea97c8a395cfe918505cfdc Sep 13 00:12:22.941332 env[1324]: time="2025-09-13T00:12:22.941127714Z" level=warning msg="cleaning up after shim disconnected" id=b3a3705eca311cc5a702d3ed98bae892b91968b1eea97c8a395cfe918505cfdc namespace=k8s.io Sep 13 00:12:22.941332 env[1324]: time="2025-09-13T00:12:22.941166794Z" level=info msg="cleaning up dead shim" Sep 13 00:12:22.955355 env[1324]: time="2025-09-13T00:12:22.955301644Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4239 runtime=io.containerd.runc.v2\n" Sep 13 00:12:23.617344 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b3a3705eca311cc5a702d3ed98bae892b91968b1eea97c8a395cfe918505cfdc-rootfs.mount: Deactivated successfully. Sep 13 00:12:23.645807 kubelet[2079]: E0913 00:12:23.645767 2079 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:12:23.814592 kubelet[2079]: E0913 00:12:23.813711 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:23.821428 env[1324]: time="2025-09-13T00:12:23.820815744Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:12:23.858574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731492420.mount: Deactivated successfully. Sep 13 00:12:23.863045 env[1324]: time="2025-09-13T00:12:23.863001453Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea41ebfccbd7b312d88e06fc06c77f4b93656e634a3dc120932f461dd8a8b124\"" Sep 13 00:12:23.863240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081039867.mount: Deactivated successfully. 
Sep 13 00:12:23.864243 env[1324]: time="2025-09-13T00:12:23.864142773Z" level=info msg="StartContainer for \"ea41ebfccbd7b312d88e06fc06c77f4b93656e634a3dc120932f461dd8a8b124\"" Sep 13 00:12:23.919318 env[1324]: time="2025-09-13T00:12:23.919209130Z" level=info msg="StartContainer for \"ea41ebfccbd7b312d88e06fc06c77f4b93656e634a3dc120932f461dd8a8b124\" returns successfully" Sep 13 00:12:23.944249 env[1324]: time="2025-09-13T00:12:23.944190547Z" level=info msg="shim disconnected" id=ea41ebfccbd7b312d88e06fc06c77f4b93656e634a3dc120932f461dd8a8b124 Sep 13 00:12:23.944249 env[1324]: time="2025-09-13T00:12:23.944248427Z" level=warning msg="cleaning up after shim disconnected" id=ea41ebfccbd7b312d88e06fc06c77f4b93656e634a3dc120932f461dd8a8b124 namespace=k8s.io Sep 13 00:12:23.944249 env[1324]: time="2025-09-13T00:12:23.944257987Z" level=info msg="cleaning up dead shim" Sep 13 00:12:23.953062 env[1324]: time="2025-09-13T00:12:23.953020553Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:12:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4294 runtime=io.containerd.runc.v2\n" Sep 13 00:12:24.586059 kubelet[2079]: E0913 00:12:24.586027 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:24.617542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea41ebfccbd7b312d88e06fc06c77f4b93656e634a3dc120932f461dd8a8b124-rootfs.mount: Deactivated successfully. 
Sep 13 00:12:24.819314 kubelet[2079]: E0913 00:12:24.818691 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:24.823363 env[1324]: time="2025-09-13T00:12:24.823318325Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:12:24.840060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857474249.mount: Deactivated successfully. Sep 13 00:12:24.850032 env[1324]: time="2025-09-13T00:12:24.849983263Z" level=info msg="CreateContainer within sandbox \"189155cfc43c64eb4d0daf5a4151a1dc3dbb6dcacb5e3d01d54d0cbad77f5163\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67e89aed09063e5602e2f92afdefd245f1bdf2cc6e8a51ee59c1a88b50435510\"" Sep 13 00:12:24.853513 env[1324]: time="2025-09-13T00:12:24.852405704Z" level=info msg="StartContainer for \"67e89aed09063e5602e2f92afdefd245f1bdf2cc6e8a51ee59c1a88b50435510\"" Sep 13 00:12:24.921874 env[1324]: time="2025-09-13T00:12:24.921627070Z" level=info msg="StartContainer for \"67e89aed09063e5602e2f92afdefd245f1bdf2cc6e8a51ee59c1a88b50435510\" returns successfully" Sep 13 00:12:25.182216 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 13 00:12:25.822788 kubelet[2079]: E0913 00:12:25.822750 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:27.076184 systemd[1]: run-containerd-runc-k8s.io-67e89aed09063e5602e2f92afdefd245f1bdf2cc6e8a51ee59c1a88b50435510-runc.L97EdV.mount: Deactivated successfully. 
Sep 13 00:12:27.144860 kubelet[2079]: E0913 00:12:27.144808 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:27.582375 kubelet[2079]: E0913 00:12:27.582331 2079 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-jdg2h" podUID="1077a582-194b-4e23-b1a6-865e432434e9" Sep 13 00:12:28.106905 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:12:28.096318 systemd-networkd[1097]: lxc_health: Link UP Sep 13 00:12:28.106748 systemd-networkd[1097]: lxc_health: Gained carrier Sep 13 00:12:29.147147 kubelet[2079]: E0913 00:12:29.147098 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:29.176874 kubelet[2079]: I0913 00:12:29.176792 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c27h5" podStartSLOduration=9.176776814 podStartE2EDuration="9.176776814s" podCreationTimestamp="2025-09-13 00:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:25.843995905 +0000 UTC m=+87.360511304" watchObservedRunningTime="2025-09-13 00:12:29.176776814 +0000 UTC m=+90.693292213" Sep 13 00:12:29.267585 systemd-networkd[1097]: lxc_health: Gained IPv6LL Sep 13 00:12:29.582868 kubelet[2079]: E0913 00:12:29.582805 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:29.830965 kubelet[2079]: E0913 
00:12:29.830918 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:30.832753 kubelet[2079]: E0913 00:12:30.832721 2079 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:31.367404 systemd[1]: run-containerd-runc-k8s.io-67e89aed09063e5602e2f92afdefd245f1bdf2cc6e8a51ee59c1a88b50435510-runc.aJFl9L.mount: Deactivated successfully. Sep 13 00:12:33.570332 sshd[3876]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:33.573016 systemd[1]: sshd@23-10.0.0.44:22-10.0.0.1:44874.service: Deactivated successfully. Sep 13 00:12:33.573957 systemd-logind[1308]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:12:33.574006 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:12:33.574940 systemd-logind[1308]: Removed session 24.