Aug 13 00:01:33.915206 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:01:33.915232 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:01:33.915242 kernel: KASLR enabled
Aug 13 00:01:33.915248 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:01:33.915254 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 13 00:01:33.915260 kernel: random: crng init done
Aug 13 00:01:33.915268 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:01:33.915274 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 13 00:01:33.915280 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:01:33.915288 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915295 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915301 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915308 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915314 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915322 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915330 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915337 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915343 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:01:33.915350 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 13 00:01:33.915356 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:01:33.915372 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:01:33.915379 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 13 00:01:33.915386 kernel: Zone ranges:
Aug 13 00:01:33.915392 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:01:33.915399 kernel: DMA32 empty
Aug 13 00:01:33.915408 kernel: Normal empty
Aug 13 00:01:33.915416 kernel: Movable zone start for each node
Aug 13 00:01:33.915423 kernel: Early memory node ranges
Aug 13 00:01:33.915429 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 13 00:01:33.915452 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 13 00:01:33.915459 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 13 00:01:33.915466 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 13 00:01:33.915473 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 13 00:01:33.915480 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 13 00:01:33.915488 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 13 00:01:33.915494 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:01:33.915501 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 13 00:01:33.915509 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:01:33.915516 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:01:33.915523 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:01:33.915533 kernel: psci: Trusted OS migration not required
Aug 13 00:01:33.915540 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:01:33.915548 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:01:33.915556 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:01:33.915563 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:01:33.915570 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 13 00:01:33.915577 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:01:33.915585 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:01:33.915592 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:01:33.915599 kernel: CPU features: detected: Spectre-v4
Aug 13 00:01:33.915606 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:01:33.915613 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:01:33.915621 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:01:33.915629 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:01:33.915637 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:01:33.915644 kernel: alternatives: applying boot alternatives
Aug 13 00:01:33.915652 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:01:33.915659 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:01:33.915666 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:01:33.915674 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:01:33.915680 kernel: Fallback order for Node 0: 0
Aug 13 00:01:33.915687 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 13 00:01:33.915695 kernel: Policy zone: DMA
Aug 13 00:01:33.915702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:01:33.915711 kernel: software IO TLB: area num 4.
Aug 13 00:01:33.915765 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 13 00:01:33.915775 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Aug 13 00:01:33.915785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:01:33.915794 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:01:33.915803 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:01:33.915811 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:01:33.915818 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:01:33.915825 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:01:33.915833 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:01:33.915840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:01:33.915850 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:01:33.915857 kernel: GICv3: 256 SPIs implemented
Aug 13 00:01:33.915864 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:01:33.915871 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:01:33.915878 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:01:33.915885 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:01:33.915892 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:01:33.915899 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:01:33.915906 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:01:33.915914 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 13 00:01:33.915921 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 13 00:01:33.915928 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:01:33.915938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:01:33.915946 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:01:33.915953 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:01:33.915961 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:01:33.915968 kernel: arm-pv: using stolen time PV
Aug 13 00:01:33.915976 kernel: Console: colour dummy device 80x25
Aug 13 00:01:33.915983 kernel: ACPI: Core revision 20230628
Aug 13 00:01:33.915990 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:01:33.915998 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:01:33.916005 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:01:33.916014 kernel: landlock: Up and running.
Aug 13 00:01:33.916022 kernel: SELinux: Initializing.
Aug 13 00:01:33.916029 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:01:33.916037 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:01:33.916044 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:01:33.916052 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:01:33.916059 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:01:33.916066 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:01:33.916074 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:01:33.916083 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:01:33.916090 kernel: Remapping and enabling EFI services.
Aug 13 00:01:33.916097 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:01:33.916105 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:01:33.916112 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:01:33.916120 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 13 00:01:33.916127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:01:33.916134 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:01:33.916142 kernel: Detected PIPT I-cache on CPU2
Aug 13 00:01:33.916149 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 13 00:01:33.916159 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 13 00:01:33.916166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:01:33.916179 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 13 00:01:33.916188 kernel: Detected PIPT I-cache on CPU3
Aug 13 00:01:33.916196 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 13 00:01:33.916204 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 13 00:01:33.916211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:01:33.916219 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 13 00:01:33.916227 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:01:33.916236 kernel: SMP: Total of 4 processors activated.
Aug 13 00:01:33.916244 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:01:33.916252 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:01:33.916259 kernel: CPU features: detected: Common not Private translations
Aug 13 00:01:33.916267 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:01:33.916275 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 13 00:01:33.916283 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:01:33.916290 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:01:33.916300 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:01:33.916308 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:01:33.916316 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:01:33.916324 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:01:33.916331 kernel: alternatives: applying system-wide alternatives
Aug 13 00:01:33.916339 kernel: devtmpfs: initialized
Aug 13 00:01:33.916347 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:01:33.916355 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:01:33.916368 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:01:33.916378 kernel: SMBIOS 3.0.0 present.
Aug 13 00:01:33.916386 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 13 00:01:33.916393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:01:33.916401 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:01:33.916409 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:01:33.916417 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:01:33.916424 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:01:33.916432 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Aug 13 00:01:33.916453 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:01:33.916461 kernel: cpuidle: using governor menu
Aug 13 00:01:33.916469 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:01:33.916477 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:01:33.916484 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:01:33.916492 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:01:33.916499 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:01:33.916507 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:01:33.916514 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:01:33.916522 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:01:33.916531 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:01:33.916539 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:01:33.916546 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:01:33.916554 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:01:33.916562 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:01:33.916569 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:01:33.916577 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:01:33.916584 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:01:33.916592 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:01:33.916601 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:01:33.916609 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:01:33.916617 kernel: ACPI: Interpreter enabled
Aug 13 00:01:33.916624 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:01:33.916632 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:01:33.916640 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:01:33.916647 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:01:33.916655 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:01:33.916853 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:01:33.916944 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:01:33.917017 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:01:33.917089 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:01:33.917163 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:01:33.917173 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:01:33.917182 kernel: PCI host bridge to bus 0000:00
Aug 13 00:01:33.917270 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:01:33.917342 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:01:33.917420 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:01:33.917507 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:01:33.917598 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:01:33.917680 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:01:33.917801 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 13 00:01:33.917887 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 13 00:01:33.917959 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:01:33.918030 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:01:33.918601 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 13 00:01:33.918704 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 13 00:01:33.918833 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:01:33.918902 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:01:33.918976 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:01:33.918987 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:01:33.918995 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:01:33.919003 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:01:33.919010 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:01:33.919018 kernel: iommu: Default domain type: Translated
Aug 13 00:01:33.919026 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:01:33.919034 kernel: efivars: Registered efivars operations
Aug 13 00:01:33.919044 kernel: vgaarb: loaded
Aug 13 00:01:33.919052 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:01:33.919061 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:01:33.919069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:01:33.919077 kernel: pnp: PnP ACPI init
Aug 13 00:01:33.919179 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:01:33.919191 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:01:33.919199 kernel: NET: Registered PF_INET protocol family
Aug 13 00:01:33.919208 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:01:33.919219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:01:33.919227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:01:33.919235 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:01:33.919243 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:01:33.919251 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:01:33.919258 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:01:33.919266 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:01:33.919274 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:01:33.919283 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:01:33.919291 kernel: kvm [1]: HYP mode not available
Aug 13 00:01:33.919298 kernel: Initialise system trusted keyrings
Aug 13 00:01:33.919306 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:01:33.919314 kernel: Key type asymmetric registered
Aug 13 00:01:33.919321 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:01:33.919329 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:01:33.919337 kernel: io scheduler mq-deadline registered
Aug 13 00:01:33.919344 kernel: io scheduler kyber registered
Aug 13 00:01:33.919352 kernel: io scheduler bfq registered
Aug 13 00:01:33.919369 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:01:33.919377 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:01:33.919386 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:01:33.919905 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 13 00:01:33.919927 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:01:33.919934 kernel: thunder_xcv, ver 1.0
Aug 13 00:01:33.919942 kernel: thunder_bgx, ver 1.0
Aug 13 00:01:33.919949 kernel: nicpf, ver 1.0
Aug 13 00:01:33.919957 kernel: nicvf, ver 1.0
Aug 13 00:01:33.920059 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:01:33.920124 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:01:33 UTC (1755043293)
Aug 13 00:01:33.920135 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:01:33.920143 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 13 00:01:33.920151 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:01:33.920159 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:01:33.920167 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:01:33.920175 kernel: Segment Routing with IPv6
Aug 13 00:01:33.920185 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:01:33.920193 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:01:33.920200 kernel: Key type dns_resolver registered
Aug 13 00:01:33.920208 kernel: registered taskstats version 1
Aug 13 00:01:33.920215 kernel: Loading compiled-in X.509 certificates
Aug 13 00:01:33.920223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:01:33.920231 kernel: Key type .fscrypt registered
Aug 13 00:01:33.920238 kernel: Key type fscrypt-provisioning registered
Aug 13 00:01:33.920246 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:01:33.920255 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:01:33.920263 kernel: ima: No architecture policies found
Aug 13 00:01:33.920270 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:01:33.920278 kernel: clk: Disabling unused clocks
Aug 13 00:01:33.920285 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:01:33.920293 kernel: Run /init as init process
Aug 13 00:01:33.920301 kernel: with arguments:
Aug 13 00:01:33.920309 kernel: /init
Aug 13 00:01:33.920316 kernel: with environment:
Aug 13 00:01:33.920325 kernel: HOME=/
Aug 13 00:01:33.920333 kernel: TERM=linux
Aug 13 00:01:33.920340 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:01:33.920350 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:01:33.920370 systemd[1]: Detected virtualization kvm.
Aug 13 00:01:33.920379 systemd[1]: Detected architecture arm64.
Aug 13 00:01:33.920387 systemd[1]: Running in initrd.
Aug 13 00:01:33.920397 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:01:33.920404 systemd[1]: Hostname set to .
Aug 13 00:01:33.920413 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:01:33.920421 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:01:33.920429 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:01:33.920462 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:01:33.920472 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:01:33.920480 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:01:33.920491 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:01:33.920499 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:01:33.920509 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:01:33.920518 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:01:33.920526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:01:33.920534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:01:33.920548 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:01:33.920558 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:01:33.920566 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:01:33.920575 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:01:33.920583 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:01:33.920591 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:01:33.920599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:01:33.920607 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:01:33.920621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:01:33.920633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:01:33.920645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:01:33.920654 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:01:33.920663 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:01:33.920671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:01:33.920680 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:01:33.920688 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:01:33.920696 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:01:33.920704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:01:33.920719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:01:33.920728 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:01:33.920736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:01:33.920744 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:01:33.920776 systemd-journald[237]: Collecting audit messages is disabled.
Aug 13 00:01:33.920799 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:01:33.920808 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:01:33.920816 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:01:33.920826 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:01:33.920834 kernel: Bridge firewalling registered
Aug 13 00:01:33.920842 systemd-journald[237]: Journal started
Aug 13 00:01:33.920862 systemd-journald[237]: Runtime Journal (/run/log/journal/bfd38a4a8e904c4999bc409ff6ea00df) is 5.9M, max 47.3M, 41.4M free.
Aug 13 00:01:33.902180 systemd-modules-load[239]: Inserted module 'overlay'
Aug 13 00:01:33.923207 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:01:33.921864 systemd-modules-load[239]: Inserted module 'br_netfilter'
Aug 13 00:01:33.925051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:01:33.926249 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:01:33.931102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:01:33.932783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:01:33.936482 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:01:33.944209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:01:33.945925 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:01:33.947540 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:01:33.949171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:01:33.962681 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:01:33.966636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:01:33.976730 dracut-cmdline[274]: dracut-dracut-053
Aug 13 00:01:33.979557 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:01:34.003213 systemd-resolved[277]: Positive Trust Anchors:
Aug 13 00:01:34.003233 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:01:34.003265 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:01:34.008727 systemd-resolved[277]: Defaulting to hostname 'linux'.
Aug 13 00:01:34.010294 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:01:34.011284 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:01:34.051471 kernel: SCSI subsystem initialized
Aug 13 00:01:34.056461 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:01:34.063463 kernel: iscsi: registered transport (tcp)
Aug 13 00:01:34.079481 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:01:34.079544 kernel: QLogic iSCSI HBA Driver
Aug 13 00:01:34.125016 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:01:34.137667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:01:34.158816 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:01:34.158904 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:01:34.158916 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:01:34.210468 kernel: raid6: neonx8 gen() 10545 MB/s
Aug 13 00:01:34.227452 kernel: raid6: neonx4 gen() 15525 MB/s
Aug 13 00:01:34.244449 kernel: raid6: neonx2 gen() 13145 MB/s
Aug 13 00:01:34.261450 kernel: raid6: neonx1 gen() 10466 MB/s
Aug 13 00:01:34.278456 kernel: raid6: int64x8 gen() 6931 MB/s
Aug 13 00:01:34.295451 kernel: raid6: int64x4 gen() 7209 MB/s
Aug 13 00:01:34.312453 kernel: raid6: int64x2 gen() 6030 MB/s
Aug 13 00:01:34.329459 kernel: raid6: int64x1 gen() 4766 MB/s
Aug 13 00:01:34.329491 kernel: raid6: using algorithm neonx4 gen() 15525 MB/s
Aug 13 00:01:34.346461 kernel: raid6: .... xor() 12281 MB/s, rmw enabled
Aug 13 00:01:34.346477 kernel: raid6: using neon recovery algorithm
Aug 13 00:01:34.351828 kernel: xor: measuring software checksum speed
Aug 13 00:01:34.351852 kernel: 8regs : 19740 MB/sec
Aug 13 00:01:34.352454 kernel: 32regs : 19074 MB/sec
Aug 13 00:01:34.353452 kernel: arm64_neon : 25006 MB/sec
Aug 13 00:01:34.353467 kernel: xor: using function: arm64_neon (25006 MB/sec)
Aug 13 00:01:34.407518 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:01:34.420064 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:01:34.432672 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:01:34.446500 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Aug 13 00:01:34.450016 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:01:34.461672 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:01:34.474112 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Aug 13 00:01:34.508318 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:01:34.521719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:01:34.566212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:01:34.574651 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:01:34.591618 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:01:34.594779 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:01:34.596318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:01:34.598650 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:01:34.607728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:01:34.621683 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:01:34.625515 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 13 00:01:34.625896 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:01:34.631397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:01:34.631466 kernel: GPT:9289727 != 19775487
Aug 13 00:01:34.631477 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:01:34.631487 kernel: GPT:9289727 != 19775487
Aug 13 00:01:34.631496 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:01:34.631513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:01:34.634768 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:01:34.634891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:01:34.642474 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:01:34.643605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:01:34.643842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:01:34.646057 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:01:34.654980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:01:34.665467 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (526)
Aug 13 00:01:34.669635 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (510)
Aug 13 00:01:34.670512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:01:34.678231 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 00:01:34.683009 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 00:01:34.686727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 00:01:34.687711 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 00:01:34.693241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:01:34.709755 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:01:34.711548 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:01:34.721021 disk-uuid[550]: Primary Header is updated.
Aug 13 00:01:34.721021 disk-uuid[550]: Secondary Entries is updated.
Aug 13 00:01:34.721021 disk-uuid[550]: Secondary Header is updated.
Aug 13 00:01:34.726480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:01:34.741585 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:01:35.741470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:01:35.741532 disk-uuid[553]: The operation has completed successfully.
Aug 13 00:01:35.777774 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:01:35.777907 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:01:35.793803 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:01:35.798970 sh[575]: Success
Aug 13 00:01:35.818464 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:01:35.889157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:01:35.891057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:01:35.891971 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:01:35.904725 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:01:35.904793 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:01:35.904804 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:01:35.905481 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:01:35.906481 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:01:35.911208 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:01:35.912450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:01:35.919699 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:01:35.921239 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:01:35.932666 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:01:35.932724 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:01:35.932742 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:01:35.936490 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:01:35.946956 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:01:35.949453 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:01:35.957270 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:01:35.963671 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:01:36.044935 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:01:36.059724 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:01:36.070400 ignition[670]: Ignition 2.19.0
Aug 13 00:01:36.070415 ignition[670]: Stage: fetch-offline
Aug 13 00:01:36.070465 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:01:36.070474 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:01:36.070630 ignition[670]: parsed url from cmdline: ""
Aug 13 00:01:36.070634 ignition[670]: no config URL provided
Aug 13 00:01:36.070639 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:01:36.070646 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:01:36.070670 ignition[670]: op(1): [started] loading QEMU firmware config module
Aug 13 00:01:36.070682 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:01:36.081128 ignition[670]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:01:36.085064 systemd-networkd[764]: lo: Link UP
Aug 13 00:01:36.085077 systemd-networkd[764]: lo: Gained carrier
Aug 13 00:01:36.085804 systemd-networkd[764]: Enumeration completed
Aug 13 00:01:36.086025 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:01:36.086223 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:01:36.086226 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:01:36.087083 systemd-networkd[764]: eth0: Link UP
Aug 13 00:01:36.087086 systemd-networkd[764]: eth0: Gained carrier
Aug 13 00:01:36.087095 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:01:36.089329 systemd[1]: Reached target network.target - Network.
Aug 13 00:01:36.108511 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:01:36.135142 ignition[670]: parsing config with SHA512: fb9a87b087a108bc82e647565830ecd46600c8fab0baee55cd713cacdfa6434889f68ac6aec126c0d53ef584ea63398757ac483cd9ca1e67b1ae210fc8897706
Aug 13 00:01:36.140351 unknown[670]: fetched base config from "system"
Aug 13 00:01:36.140372 unknown[670]: fetched user config from "qemu"
Aug 13 00:01:36.141144 ignition[670]: fetch-offline: fetch-offline passed
Aug 13 00:01:36.141772 ignition[670]: Ignition finished successfully
Aug 13 00:01:36.143562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:01:36.145909 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:01:36.156829 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:01:36.169276 ignition[770]: Ignition 2.19.0
Aug 13 00:01:36.169288 ignition[770]: Stage: kargs
Aug 13 00:01:36.169520 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:01:36.169530 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:01:36.170596 ignition[770]: kargs: kargs passed
Aug 13 00:01:36.170650 ignition[770]: Ignition finished successfully
Aug 13 00:01:36.172862 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:01:36.189721 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:01:36.200289 ignition[778]: Ignition 2.19.0
Aug 13 00:01:36.200301 ignition[778]: Stage: disks
Aug 13 00:01:36.200523 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:01:36.200534 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:01:36.201525 ignition[778]: disks: disks passed
Aug 13 00:01:36.203997 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:01:36.201578 ignition[778]: Ignition finished successfully
Aug 13 00:01:36.205456 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:01:36.206737 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:01:36.207950 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:01:36.209346 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:01:36.211007 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:01:36.227687 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:01:36.241720 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:01:36.246674 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:01:36.267615 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:01:36.332473 kernel: EXT4-fs (vda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:01:36.333199 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:01:36.334425 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:01:36.345558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:01:36.347595 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:01:36.348639 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:01:36.348689 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:01:36.348715 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:01:36.357454 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:01:36.360592 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (797)
Aug 13 00:01:36.360618 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:01:36.360629 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:01:36.360638 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:01:36.360865 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:01:36.363466 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:01:36.365398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:01:36.408041 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:01:36.412053 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:01:36.415952 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:01:36.421412 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:01:36.511218 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:01:36.520635 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:01:36.522160 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:01:36.531458 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:01:36.548227 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:01:36.559820 ignition[910]: INFO : Ignition 2.19.0
Aug 13 00:01:36.559820 ignition[910]: INFO : Stage: mount
Aug 13 00:01:36.562174 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:01:36.562174 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:01:36.562174 ignition[910]: INFO : mount: mount passed
Aug 13 00:01:36.562174 ignition[910]: INFO : Ignition finished successfully
Aug 13 00:01:36.562784 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:01:36.575634 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:01:36.903880 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:01:36.920688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:01:36.930483 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (924)
Aug 13 00:01:36.933739 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:01:36.933795 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:01:36.933806 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:01:36.938700 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:01:36.940134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:01:36.972060 ignition[941]: INFO : Ignition 2.19.0
Aug 13 00:01:36.972060 ignition[941]: INFO : Stage: files
Aug 13 00:01:36.973610 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:01:36.973610 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:01:36.973610 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:01:36.977209 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:01:36.977209 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:01:36.980117 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:01:36.980117 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:01:36.980117 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:01:36.978976 unknown[941]: wrote ssh authorized keys file for user: core
Aug 13 00:01:36.988357 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:01:36.988357 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:01:36.988357 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:01:36.988357 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 13 00:01:37.037604 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:01:37.380612 systemd-networkd[764]: eth0: Gained IPv6LL
Aug 13 00:01:37.435262 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:01:37.435262 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:01:37.440099 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Aug 13 00:01:37.678138 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Aug 13 00:01:37.782064 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:01:37.783642 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:01:37.798791 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:01:37.798791 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:01:37.798791 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:01:37.798791 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:01:37.798791 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Aug 13 00:01:38.067745 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Aug 13 00:01:38.481608 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:01:38.481608 ignition[941]: INFO : files: op(d): [started] processing unit "containerd.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(d): [finished] processing unit "containerd.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Aug 13 00:01:38.486237 ignition[941]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:01:38.519401 ignition[941]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:01:38.524357 ignition[941]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:01:38.525585 ignition[941]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:01:38.525585 ignition[941]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:01:38.525585 ignition[941]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:01:38.525585 ignition[941]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:01:38.525585 ignition[941]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:01:38.525585 ignition[941]: INFO : files: files passed
Aug 13 00:01:38.525585 ignition[941]: INFO : Ignition finished successfully
Aug 13 00:01:38.527198 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:01:38.539881 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:01:38.542830 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:01:38.544322 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:01:38.544457 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:01:38.550454 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 00:01:38.552721 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:01:38.552721 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:01:38.555505 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:01:38.557510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:01:38.558707 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:01:38.567617 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:01:38.592020 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:01:38.592130 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:01:38.594492 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:01:38.596273 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:01:38.598135 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:01:38.599092 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:01:38.617294 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:01:38.628692 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:01:38.637080 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:01:38.638179 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:01:38.639717 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:01:38.641088 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:01:38.641218 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:01:38.646211 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:01:38.647756 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:01:38.648954 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:01:38.650239 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:01:38.651650 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:01:38.653074 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:01:38.654375 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:01:38.655800 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:01:38.657199 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:01:38.659014 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:01:38.660052 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:01:38.660190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:01:38.661863 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:01:38.663231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:01:38.664711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:01:38.665537 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:01:38.666928 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:01:38.667056 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:01:38.669809 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:01:38.669937 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:01:38.671426 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:01:38.672681 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:01:38.672851 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:01:38.674171 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:01:38.675480 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:01:38.677060 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:01:38.677154 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:01:38.678322 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:01:38.678416 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:01:38.679583 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:01:38.679694 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:01:38.680960 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:01:38.681057 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:01:38.695672 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:01:38.697154 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:01:38.697851 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:01:38.697972 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:01:38.699432 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:01:38.699550 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:01:38.704057 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:01:38.704173 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:01:38.711025 ignition[996]: INFO : Ignition 2.19.0
Aug 13 00:01:38.711025 ignition[996]: INFO : Stage: umount
Aug 13 00:01:38.711684 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:01:38.713965 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:01:38.713965 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:01:38.716413 ignition[996]: INFO : umount: umount passed
Aug 13 00:01:38.716413 ignition[996]: INFO : Ignition finished successfully
Aug 13 00:01:38.717331 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:01:38.717497 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:01:38.718651 systemd[1]: Stopped target network.target - Network.
Aug 13 00:01:38.720609 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:01:38.720684 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:01:38.722098 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:01:38.722144 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:01:38.723284 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:01:38.723332 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:01:38.725625 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:01:38.725685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:01:38.726783 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:01:38.728084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:01:38.735952 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:01:38.736083 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:01:38.738731 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:01:38.738807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:01:38.739499 systemd-networkd[764]: eth0: DHCPv6 lease lost
Aug 13 00:01:38.741177 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:01:38.741321 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:01:38.742814 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:01:38.742845 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:01:38.748618 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:01:38.749298 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:01:38.749376 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:01:38.750921 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:01:38.750969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:01:38.752341 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:01:38.752396 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:01:38.754053 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:01:38.764033 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:01:38.764150 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:01:38.767855 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:01:38.767990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:01:38.769835 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:01:38.769897 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:01:38.771342 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:01:38.771389 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:01:38.772785 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:01:38.772847 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:01:38.775132 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:01:38.775197 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:01:38.777205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:01:38.777252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 00:01:38.789625 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:01:38.790499 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:01:38.790569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:01:38.792167 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:01:38.792207 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:01:38.793667 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:01:38.793704 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:01:38.795332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:01:38.795378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:01:38.797195 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:01:38.797288 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:01:38.798698 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:01:38.798773 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:01:38.801551 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:01:38.803143 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:01:38.803213 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:01:38.805770 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:01:38.815684 systemd[1]: Switching root. Aug 13 00:01:38.845082 systemd-journald[237]: Journal stopped Aug 13 00:01:39.657665 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Aug 13 00:01:39.657731 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:01:39.657745 kernel: SELinux: policy capability open_perms=1 Aug 13 00:01:39.657754 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:01:39.657767 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:01:39.657777 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:01:39.657787 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:01:39.657798 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:01:39.657807 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:01:39.657817 kernel: audit: type=1403 audit(1755043299.059:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:01:39.657830 systemd[1]: Successfully loaded SELinux policy in 32.302ms. Aug 13 00:01:39.657851 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.925ms. Aug 13 00:01:39.657864 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:01:39.657875 systemd[1]: Detected virtualization kvm. Aug 13 00:01:39.657886 systemd[1]: Detected architecture arm64. Aug 13 00:01:39.657897 systemd[1]: Detected first boot. Aug 13 00:01:39.657910 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:01:39.657920 zram_generator::config[1061]: No configuration found. Aug 13 00:01:39.657934 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:01:39.657946 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:01:39.657957 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Aug 13 00:01:39.657968 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:01:39.657979 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:01:39.657989 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:01:39.657999 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:01:39.658011 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:01:39.658022 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:01:39.658035 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:01:39.658047 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:01:39.658058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:01:39.658069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:01:39.658080 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:01:39.658091 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:01:39.658106 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:01:39.658117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:01:39.658130 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 00:01:39.658143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:01:39.658157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:01:39.658167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Aug 13 00:01:39.658177 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:01:39.658188 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:01:39.658199 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:01:39.658209 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:01:39.658219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:01:39.658232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:01:39.658243 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:01:39.658254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:01:39.658268 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:01:39.658279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:01:39.658290 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:01:39.658300 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:01:39.658311 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:01:39.658322 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:01:39.658334 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:01:39.658356 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:01:39.658370 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:01:39.658381 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:01:39.658392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:01:39.658404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Aug 13 00:01:39.658414 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:01:39.658425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:01:39.658467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:01:39.658486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:01:39.658497 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:01:39.658508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:01:39.658519 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:01:39.658529 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:01:39.658540 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:01:39.658551 kernel: fuse: init (API version 7.39) Aug 13 00:01:39.658561 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:01:39.658573 kernel: ACPI: bus type drm_connector registered Aug 13 00:01:39.658584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:01:39.658594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:01:39.658605 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:01:39.658615 kernel: loop: module loaded Aug 13 00:01:39.658625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:01:39.658659 systemd-journald[1138]: Collecting audit messages is disabled. Aug 13 00:01:39.658681 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Aug 13 00:01:39.658694 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:01:39.658707 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:01:39.658718 systemd-journald[1138]: Journal started Aug 13 00:01:39.658740 systemd-journald[1138]: Runtime Journal (/run/log/journal/bfd38a4a8e904c4999bc409ff6ea00df) is 5.9M, max 47.3M, 41.4M free. Aug 13 00:01:39.661505 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:01:39.661555 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:01:39.663919 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:01:39.665410 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:01:39.666786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:01:39.668071 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:01:39.668242 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:01:39.669512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:39.669672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:01:39.670984 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:01:39.671148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:01:39.672464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:39.672637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:01:39.673817 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:01:39.673984 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:01:39.675152 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:39.677989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 13 00:01:39.679186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:01:39.680645 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:01:39.681922 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:01:39.683312 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:01:39.695717 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:01:39.701611 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:01:39.703767 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:01:39.704653 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:01:39.710106 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:01:39.714484 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:01:39.715380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:39.717671 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:01:39.718668 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:01:39.720686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:01:39.725710 systemd-journald[1138]: Time spent on flushing to /var/log/journal/bfd38a4a8e904c4999bc409ff6ea00df is 12.452ms for 847 entries. Aug 13 00:01:39.725710 systemd-journald[1138]: System Journal (/var/log/journal/bfd38a4a8e904c4999bc409ff6ea00df) is 8.0M, max 195.6M, 187.6M free. 
Aug 13 00:01:39.928889 systemd-journald[1138]: Received client request to flush runtime journal. Aug 13 00:01:39.726955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:01:39.729624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:01:39.731046 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:01:39.732089 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:01:39.737719 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:01:39.749605 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:01:39.795217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:01:39.796570 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Aug 13 00:01:39.796581 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Aug 13 00:01:39.800985 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:01:39.802591 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:01:39.804587 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:01:39.814652 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:01:39.842855 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:01:39.851635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:01:39.865834 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Aug 13 00:01:39.865846 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. 
Aug 13 00:01:39.870022 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:01:39.932014 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:01:40.263314 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:01:40.274634 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:01:40.299078 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Aug 13 00:01:40.316242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:01:40.331783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:01:40.337842 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:01:40.344909 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Aug 13 00:01:40.365766 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1237) Aug 13 00:01:40.403806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:01:40.404990 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:01:40.468933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:01:40.478657 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:01:40.481539 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:01:40.491080 systemd-networkd[1229]: lo: Link UP Aug 13 00:01:40.491092 systemd-networkd[1229]: lo: Gained carrier Aug 13 00:01:40.491860 systemd-networkd[1229]: Enumeration completed Aug 13 00:01:40.495310 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 00:01:40.495322 systemd-networkd[1229]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:01:40.495592 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:01:40.498546 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:01:40.501786 systemd-networkd[1229]: eth0: Link UP Aug 13 00:01:40.501795 systemd-networkd[1229]: eth0: Gained carrier Aug 13 00:01:40.501814 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:01:40.506834 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:01:40.523511 systemd-networkd[1229]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:01:40.537660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:01:40.545591 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:01:40.547413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:01:40.556684 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:01:40.562025 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:01:40.600055 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:01:40.601225 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:01:40.602194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:01:40.602225 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:01:40.602984 systemd[1]: Reached target machines.target - Containers. 
Aug 13 00:01:40.604785 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:01:40.622628 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:01:40.624794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:01:40.625783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:01:40.627646 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:01:40.631949 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:01:40.636689 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:01:40.638578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:01:40.646124 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:01:40.652468 kernel: loop0: detected capacity change from 0 to 114328 Aug 13 00:01:40.664130 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:01:40.665615 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:01:40.668180 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:01:40.700471 kernel: loop1: detected capacity change from 0 to 203944 Aug 13 00:01:40.736621 kernel: loop2: detected capacity change from 0 to 114432 Aug 13 00:01:40.785453 kernel: loop3: detected capacity change from 0 to 114328 Aug 13 00:01:40.790484 kernel: loop4: detected capacity change from 0 to 203944 Aug 13 00:01:40.796467 kernel: loop5: detected capacity change from 0 to 114432 Aug 13 00:01:40.800046 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Aug 13 00:01:40.800480 (sd-merge)[1290]: Merged extensions into '/usr'. Aug 13 00:01:40.803963 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:01:40.803981 systemd[1]: Reloading... Aug 13 00:01:40.854413 zram_generator::config[1319]: No configuration found. Aug 13 00:01:40.924499 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:01:40.947475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:40.990408 systemd[1]: Reloading finished in 186 ms. Aug 13 00:01:41.008271 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:01:41.009458 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:01:41.024639 systemd[1]: Starting ensure-sysext.service... Aug 13 00:01:41.026557 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:01:41.030097 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:01:41.030113 systemd[1]: Reloading... Aug 13 00:01:41.044808 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:01:41.045069 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:01:41.045715 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:01:41.045931 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Aug 13 00:01:41.045981 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. 
Aug 13 00:01:41.048539 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:01:41.048551 systemd-tmpfiles[1361]: Skipping /boot Aug 13 00:01:41.058757 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:01:41.058774 systemd-tmpfiles[1361]: Skipping /boot Aug 13 00:01:41.070482 zram_generator::config[1389]: No configuration found. Aug 13 00:01:41.167309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:41.210264 systemd[1]: Reloading finished in 179 ms. Aug 13 00:01:41.224400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:01:41.240666 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:01:41.244613 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:01:41.247228 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:01:41.252657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:01:41.254699 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:01:41.271817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:01:41.287364 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:01:41.290005 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:01:41.294132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:01:41.300770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Aug 13 00:01:41.305735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:01:41.306742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:01:41.311819 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:01:41.314785 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:01:41.316216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:41.316404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:01:41.323578 augenrules[1464]: No rules Aug 13 00:01:41.326735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:41.326902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:01:41.328717 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:01:41.330040 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:41.332064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:01:41.336722 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:01:41.342356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:01:41.352817 systemd-resolved[1435]: Positive Trust Anchors: Aug 13 00:01:41.352837 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:01:41.352870 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:01:41.353732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:01:41.356234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:01:41.360706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:01:41.363727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:01:41.365964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:01:41.366264 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:01:41.367548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:41.367815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:01:41.369261 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:01:41.369555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:01:41.370862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:41.371087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:01:41.372127 systemd-resolved[1435]: Defaulting to hostname 'linux'. Aug 13 00:01:41.374108 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:41.374645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:01:41.376501 systemd[1]: Finished ensure-sysext.service. Aug 13 00:01:41.384148 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:01:41.385528 systemd[1]: Reached target network.target - Network. Aug 13 00:01:41.386363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:01:41.387335 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:41.387452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:01:41.390332 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:01:41.443217 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:01:41.444246 systemd-timesyncd[1493]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:01:41.444303 systemd-timesyncd[1493]: Initial clock synchronization to Wed 2025-08-13 00:01:41.364413 UTC. Aug 13 00:01:41.444777 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:01:41.445690 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:01:41.446652 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:01:41.447590 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:01:41.448532 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Aug 13 00:01:41.448568 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:01:41.449242 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 00:01:41.450190 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:01:41.451135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:01:41.452481 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:01:41.454561 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:01:41.457185 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:01:41.459024 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:01:41.466548 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:01:41.467375 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:01:41.468155 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:01:41.469033 systemd[1]: System is tainted: cgroupsv1
Aug 13 00:01:41.469084 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:01:41.469107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:01:41.470322 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:01:41.472288 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 00:01:41.474081 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:01:41.478626 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:01:41.479473 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:01:41.481381 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:01:41.486935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:01:41.490091 jq[1499]: false
Aug 13 00:01:41.492323 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:01:41.495295 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:01:41.498475 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:01:41.505824 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:01:41.508008 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found loop3
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found loop4
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found loop5
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda1
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda2
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda3
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found usr
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda4
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda6
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda7
Aug 13 00:01:41.510270 extend-filesystems[1500]: Found vda9
Aug 13 00:01:41.510270 extend-filesystems[1500]: Checking size of /dev/vda9
Aug 13 00:01:41.532200 dbus-daemon[1498]: [system] SELinux support is enabled
Aug 13 00:01:41.514859 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:01:41.519880 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:01:41.520117 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 00:01:41.540829 jq[1519]: true
Aug 13 00:01:41.520394 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:01:41.520606 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 00:01:41.529902 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:01:41.530122 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 00:01:41.533900 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:01:41.548314 jq[1527]: true
Aug 13 00:01:41.567510 extend-filesystems[1500]: Resized partition /dev/vda9
Aug 13 00:01:41.573789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1224)
Aug 13 00:01:41.589505 tar[1526]: linux-arm64/helm
Aug 13 00:01:41.586771 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:01:41.591612 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:01:41.591763 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:01:41.593038 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:01:41.593168 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:01:41.597050 extend-filesystems[1543]: resize2fs 1.47.1 (20-May-2024)
Aug 13 00:01:41.611484 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 00:01:41.616782 systemd-logind[1511]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 13 00:01:41.617739 systemd-logind[1511]: New seat seat0.
Aug 13 00:01:41.621759 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:01:41.630047 update_engine[1516]: I20250813 00:01:41.629819 1516 main.cc:92] Flatcar Update Engine starting
Aug 13 00:01:41.634384 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:01:41.637597 update_engine[1516]: I20250813 00:01:41.634737 1516 update_check_scheduler.cc:74] Next update check in 7m27s
Aug 13 00:01:41.637091 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:01:41.645387 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:01:41.657455 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 00:01:41.671081 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:01:41.671081 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:01:41.671081 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 00:01:41.677397 extend-filesystems[1500]: Resized filesystem in /dev/vda9
Aug 13 00:01:41.676925 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:01:41.681072 bash[1558]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:01:41.677195 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 00:01:41.680229 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 00:01:41.685213 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 00:01:41.697127 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:01:41.796552 systemd-networkd[1229]: eth0: Gained IPv6LL
Aug 13 00:01:41.803690 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 00:01:41.806591 containerd[1536]: time="2025-08-13T00:01:41.803636920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 00:01:41.813052 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 00:01:41.828938 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 13 00:01:41.831535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:01:41.833860 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 00:01:41.857020 containerd[1536]: time="2025-08-13T00:01:41.856977360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:41.863638 containerd[1536]: time="2025-08-13T00:01:41.863355960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:41.863638 containerd[1536]: time="2025-08-13T00:01:41.863407520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:01:41.863638 containerd[1536]: time="2025-08-13T00:01:41.863427160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:01:41.863638 containerd[1536]: time="2025-08-13T00:01:41.863613160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 00:01:41.863638 containerd[1536]: time="2025-08-13T00:01:41.863637360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:41.863817 containerd[1536]: time="2025-08-13T00:01:41.863692240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:41.863817 containerd[1536]: time="2025-08-13T00:01:41.863707680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864081 containerd[1536]: time="2025-08-13T00:01:41.863907160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864081 containerd[1536]: time="2025-08-13T00:01:41.863931080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864081 containerd[1536]: time="2025-08-13T00:01:41.863944560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864081 containerd[1536]: time="2025-08-13T00:01:41.863954520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864081 containerd[1536]: time="2025-08-13T00:01:41.864026400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864405 containerd[1536]: time="2025-08-13T00:01:41.864210400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864405 containerd[1536]: time="2025-08-13T00:01:41.864353160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:41.864405 containerd[1536]: time="2025-08-13T00:01:41.864368800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:01:41.864494 containerd[1536]: time="2025-08-13T00:01:41.864461240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:01:41.864513 containerd[1536]: time="2025-08-13T00:01:41.864505840Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:01:41.870243 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 13 00:01:41.871277 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 13 00:01:41.873830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 00:01:41.874685 containerd[1536]: time="2025-08-13T00:01:41.874597440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:01:41.874746 containerd[1536]: time="2025-08-13T00:01:41.874688240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:01:41.874746 containerd[1536]: time="2025-08-13T00:01:41.874708320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 00:01:41.874746 containerd[1536]: time="2025-08-13T00:01:41.874726320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 00:01:41.874746 containerd[1536]: time="2025-08-13T00:01:41.874741240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:01:41.875411 containerd[1536]: time="2025-08-13T00:01:41.874896240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:01:41.876896 containerd[1536]: time="2025-08-13T00:01:41.876831960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:01:41.877102 containerd[1536]: time="2025-08-13T00:01:41.877074440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 00:01:41.877102 containerd[1536]: time="2025-08-13T00:01:41.877102680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 00:01:41.877102 containerd[1536]: time="2025-08-13T00:01:41.877117560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 00:01:41.877102 containerd[1536]: time="2025-08-13T00:01:41.877135160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877148880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877163240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877177120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877191720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877204360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877217560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877229640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877257400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877272440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877284720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877308000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877320680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877345240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.877452 containerd[1536]: time="2025-08-13T00:01:41.877359880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877373840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877387000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877408040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877420880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877446560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877473880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877494760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877517840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877529920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877540680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877653960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877674560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 00:01:41.878007 containerd[1536]: time="2025-08-13T00:01:41.877686480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:01:41.877895 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 00:01:41.878287 containerd[1536]: time="2025-08-13T00:01:41.877698080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 00:01:41.878287 containerd[1536]: time="2025-08-13T00:01:41.877708080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878287 containerd[1536]: time="2025-08-13T00:01:41.877720160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 00:01:41.878287 containerd[1536]: time="2025-08-13T00:01:41.877730600Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 00:01:41.878287 containerd[1536]: time="2025-08-13T00:01:41.877740600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:01:41.878391 containerd[1536]: time="2025-08-13T00:01:41.878097520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:01:41.878391 containerd[1536]: time="2025-08-13T00:01:41.878154880Z" level=info msg="Connect containerd service"
Aug 13 00:01:41.878391 containerd[1536]: time="2025-08-13T00:01:41.878251880Z" level=info msg="using legacy CRI server"
Aug 13 00:01:41.878391 containerd[1536]: time="2025-08-13T00:01:41.878258920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 00:01:41.878391 containerd[1536]: time="2025-08-13T00:01:41.878355720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:01:41.879272 containerd[1536]: time="2025-08-13T00:01:41.878980360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:01:41.880263 containerd[1536]: time="2025-08-13T00:01:41.880060600Z" level=info msg="Start subscribing containerd event"
Aug 13 00:01:41.880263 containerd[1536]: time="2025-08-13T00:01:41.880124200Z" level=info msg="Start recovering state"
Aug 13 00:01:41.880263 containerd[1536]: time="2025-08-13T00:01:41.880200760Z" level=info msg="Start event monitor"
Aug 13 00:01:41.880263 containerd[1536]: time="2025-08-13T00:01:41.880213720Z" level=info msg="Start snapshots syncer"
Aug 13 00:01:41.880263 containerd[1536]: time="2025-08-13T00:01:41.880223720Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:01:41.880263 containerd[1536]: time="2025-08-13T00:01:41.880231600Z" level=info msg="Start streaming server"
Aug 13 00:01:41.881153 containerd[1536]: time="2025-08-13T00:01:41.880997800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:01:41.881153 containerd[1536]: time="2025-08-13T00:01:41.881066440Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:01:41.881232 containerd[1536]: time="2025-08-13T00:01:41.881219440Z" level=info msg="containerd successfully booted in 0.078654s"
Aug 13 00:01:41.881307 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 00:01:41.994670 tar[1526]: linux-arm64/LICENSE
Aug 13 00:01:41.994670 tar[1526]: linux-arm64/README.md
Aug 13 00:01:42.007722 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 00:01:42.051155 sshd_keygen[1525]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:01:42.071977 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 00:01:42.085093 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 00:01:42.091513 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:01:42.091776 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:01:42.095025 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:01:42.111057 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 00:01:42.123783 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:01:42.126079 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 13 00:01:42.127325 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:01:42.460282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:01:42.461640 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 00:01:42.465566 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:01:42.467538 systemd[1]: Startup finished in 5.964s (kernel) + 3.442s (userspace) = 9.407s.
Aug 13 00:01:42.966778 kubelet[1635]: E0813 00:01:42.966716 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:01:42.969341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:01:42.969593 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:01:46.814032 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 00:01:46.827732 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:60836.service - OpenSSH per-connection server daemon (10.0.0.1:60836).
Aug 13 00:01:46.899710 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 60836 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:01:46.901839 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:46.920560 systemd-logind[1511]: New session 1 of user core.
Aug 13 00:01:46.922376 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 00:01:46.935761 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 00:01:46.948631 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 00:01:46.954380 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 00:01:46.957804 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:01:47.047765 systemd[1654]: Queued start job for default target default.target.
Aug 13 00:01:47.048165 systemd[1654]: Created slice app.slice - User Application Slice.
Aug 13 00:01:47.048183 systemd[1654]: Reached target paths.target - Paths.
Aug 13 00:01:47.048195 systemd[1654]: Reached target timers.target - Timers.
Aug 13 00:01:47.056573 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 00:01:47.082326 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 00:01:47.082401 systemd[1654]: Reached target sockets.target - Sockets.
Aug 13 00:01:47.082413 systemd[1654]: Reached target basic.target - Basic System.
Aug 13 00:01:47.082479 systemd[1654]: Reached target default.target - Main User Target.
Aug 13 00:01:47.082512 systemd[1654]: Startup finished in 118ms.
Aug 13 00:01:47.085710 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 00:01:47.103860 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 00:01:47.169026 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:60846.service - OpenSSH per-connection server daemon (10.0.0.1:60846).
Aug 13 00:01:47.212752 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 60846 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:01:47.214342 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:47.221333 systemd-logind[1511]: New session 2 of user core.
Aug 13 00:01:47.234796 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 00:01:47.293370 sshd[1666]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:47.311805 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:60854.service - OpenSSH per-connection server daemon (10.0.0.1:60854).
Aug 13 00:01:47.312214 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:60846.service: Deactivated successfully.
Aug 13 00:01:47.314555 systemd-logind[1511]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:01:47.314991 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:01:47.323694 systemd-logind[1511]: Removed session 2.
Aug 13 00:01:47.349939 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 60854 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:01:47.351998 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:47.358107 systemd-logind[1511]: New session 3 of user core.
Aug 13 00:01:47.367864 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 00:01:47.422891 sshd[1671]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:47.430388 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:60866.service - OpenSSH per-connection server daemon (10.0.0.1:60866).
Aug 13 00:01:47.434239 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:60854.service: Deactivated successfully.
Aug 13 00:01:47.444221 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:01:47.444988 systemd-logind[1511]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:01:47.446346 systemd-logind[1511]: Removed session 3.
Aug 13 00:01:47.473219 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 60866 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:01:47.475067 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:47.480273 systemd-logind[1511]: New session 4 of user core.
Aug 13 00:01:47.490778 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 00:01:47.547508 sshd[1679]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:47.559731 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:60870.service - OpenSSH per-connection server daemon (10.0.0.1:60870). Aug 13 00:01:47.560232 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:60866.service: Deactivated successfully. Aug 13 00:01:47.562267 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:01:47.563080 systemd-logind[1511]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:01:47.564922 systemd-logind[1511]: Removed session 4. Aug 13 00:01:47.589143 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 60870 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:01:47.590633 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:47.595385 systemd-logind[1511]: New session 5 of user core. Aug 13 00:01:47.617803 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:01:47.688175 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:01:47.688992 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:47.712609 sudo[1694]: pam_unix(sudo:session): session closed for user root Aug 13 00:01:47.715893 sshd[1688]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:47.724897 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:60876.service - OpenSSH per-connection server daemon (10.0.0.1:60876). Aug 13 00:01:47.725767 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:60870.service: Deactivated successfully. Aug 13 00:01:47.727556 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:01:47.729761 systemd-logind[1511]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:01:47.731721 systemd-logind[1511]: Removed session 5. 
Aug 13 00:01:47.759494 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 60876 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:01:47.760963 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:47.765836 systemd-logind[1511]: New session 6 of user core. Aug 13 00:01:47.781829 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:01:47.835390 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:01:47.836322 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:47.840177 sudo[1704]: pam_unix(sudo:session): session closed for user root Aug 13 00:01:47.845843 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:01:47.846135 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:47.868781 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:01:47.870273 auditctl[1707]: No rules Aug 13 00:01:47.872124 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:01:47.872421 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:01:47.874816 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:01:47.902750 augenrules[1726]: No rules Aug 13 00:01:47.904265 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:01:47.905410 sudo[1703]: pam_unix(sudo:session): session closed for user root Aug 13 00:01:47.907962 sshd[1696]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:47.932802 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:60890.service - OpenSSH per-connection server daemon (10.0.0.1:60890). 
Aug 13 00:01:47.933328 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:60876.service: Deactivated successfully. Aug 13 00:01:47.934878 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:01:47.935681 systemd-logind[1511]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:01:47.937084 systemd-logind[1511]: Removed session 6. Aug 13 00:01:47.964284 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 60890 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:01:47.965769 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:47.970491 systemd-logind[1511]: New session 7 of user core. Aug 13 00:01:47.988811 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:01:48.041648 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:01:48.041923 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:48.420778 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:01:48.420935 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:01:48.718463 dockerd[1758]: time="2025-08-13T00:01:48.718255171Z" level=info msg="Starting up" Aug 13 00:01:49.021951 systemd[1]: var-lib-docker-metacopy\x2dcheck3031549663-merged.mount: Deactivated successfully. Aug 13 00:01:49.031940 dockerd[1758]: time="2025-08-13T00:01:49.031887121Z" level=info msg="Loading containers: start." Aug 13 00:01:49.202749 kernel: Initializing XFRM netlink socket Aug 13 00:01:49.303951 systemd-networkd[1229]: docker0: Link UP Aug 13 00:01:49.324473 dockerd[1758]: time="2025-08-13T00:01:49.324218994Z" level=info msg="Loading containers: done." 
Aug 13 00:01:49.340619 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck149324049-merged.mount: Deactivated successfully. Aug 13 00:01:49.342732 dockerd[1758]: time="2025-08-13T00:01:49.342369837Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:01:49.342732 dockerd[1758]: time="2025-08-13T00:01:49.342502516Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:01:49.342732 dockerd[1758]: time="2025-08-13T00:01:49.342620811Z" level=info msg="Daemon has completed initialization" Aug 13 00:01:49.398963 dockerd[1758]: time="2025-08-13T00:01:49.398816173Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:01:49.399192 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:01:50.081868 containerd[1536]: time="2025-08-13T00:01:50.081808036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:01:50.777561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938292599.mount: Deactivated successfully. 
Aug 13 00:01:51.794375 containerd[1536]: time="2025-08-13T00:01:51.794264584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:51.796332 containerd[1536]: time="2025-08-13T00:01:51.796290615Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651815" Aug 13 00:01:51.797486 containerd[1536]: time="2025-08-13T00:01:51.797456352Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:51.801681 containerd[1536]: time="2025-08-13T00:01:51.801633597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:51.802676 containerd[1536]: time="2025-08-13T00:01:51.802632634Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.720778234s" Aug 13 00:01:51.802736 containerd[1536]: time="2025-08-13T00:01:51.802675665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:01:51.815355 containerd[1536]: time="2025-08-13T00:01:51.815306384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:01:52.839547 containerd[1536]: time="2025-08-13T00:01:52.839492980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:52.843445 containerd[1536]: time="2025-08-13T00:01:52.843403939Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460285" Aug 13 00:01:52.844689 containerd[1536]: time="2025-08-13T00:01:52.844618226Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:52.847075 containerd[1536]: time="2025-08-13T00:01:52.847043930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:52.848352 containerd[1536]: time="2025-08-13T00:01:52.848230570Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.032875609s" Aug 13 00:01:52.848352 containerd[1536]: time="2025-08-13T00:01:52.848266077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:01:52.849141 containerd[1536]: time="2025-08-13T00:01:52.849102917Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:01:53.219829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:01:53.230646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:53.353512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:01:53.360189 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:01:53.411931 kubelet[1975]: E0813 00:01:53.411865 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:53.414237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:53.414409 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:53.923005 containerd[1536]: time="2025-08-13T00:01:53.922940997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:53.927853 containerd[1536]: time="2025-08-13T00:01:53.927810280Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125091" Aug 13 00:01:53.929174 containerd[1536]: time="2025-08-13T00:01:53.929112565Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:53.932143 containerd[1536]: time="2025-08-13T00:01:53.932103528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:53.933541 containerd[1536]: time="2025-08-13T00:01:53.933502910Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.084363166s" Aug 13 00:01:53.933541 containerd[1536]: time="2025-08-13T00:01:53.933538029Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:01:53.934222 containerd[1536]: time="2025-08-13T00:01:53.934184742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:01:54.882460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223834380.mount: Deactivated successfully. Aug 13 00:01:55.250534 containerd[1536]: time="2025-08-13T00:01:55.250287857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:55.253816 containerd[1536]: time="2025-08-13T00:01:55.253761584Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915995" Aug 13 00:01:55.254864 containerd[1536]: time="2025-08-13T00:01:55.254816728Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:55.257163 containerd[1536]: time="2025-08-13T00:01:55.257113367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:55.258188 containerd[1536]: time="2025-08-13T00:01:55.257779754Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.323556176s" Aug 13 00:01:55.258188 containerd[1536]: time="2025-08-13T00:01:55.257810061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:01:55.258478 containerd[1536]: time="2025-08-13T00:01:55.258428493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:01:55.925882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658178868.mount: Deactivated successfully. Aug 13 00:01:56.628653 containerd[1536]: time="2025-08-13T00:01:56.628598829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:56.629913 containerd[1536]: time="2025-08-13T00:01:56.629880577Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 13 00:01:56.631266 containerd[1536]: time="2025-08-13T00:01:56.631216121Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:56.634486 containerd[1536]: time="2025-08-13T00:01:56.633857814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:56.635245 containerd[1536]: time="2025-08-13T00:01:56.635199669Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.37671064s" Aug 13 00:01:56.635245 containerd[1536]: time="2025-08-13T00:01:56.635243521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:01:56.635695 containerd[1536]: time="2025-08-13T00:01:56.635666151Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:01:57.083953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542561956.mount: Deactivated successfully. Aug 13 00:01:57.093204 containerd[1536]: time="2025-08-13T00:01:57.093151347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:57.094075 containerd[1536]: time="2025-08-13T00:01:57.094035556Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 13 00:01:57.095183 containerd[1536]: time="2025-08-13T00:01:57.095130522Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:57.100610 containerd[1536]: time="2025-08-13T00:01:57.100520104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:57.101459 containerd[1536]: time="2025-08-13T00:01:57.101336445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.615337ms" Aug 13 
00:01:57.101459 containerd[1536]: time="2025-08-13T00:01:57.101373595Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:01:57.102241 containerd[1536]: time="2025-08-13T00:01:57.102211067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:01:57.715602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85594371.mount: Deactivated successfully. Aug 13 00:01:59.074939 containerd[1536]: time="2025-08-13T00:01:59.074853539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:59.077252 containerd[1536]: time="2025-08-13T00:01:59.077209111Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Aug 13 00:01:59.078639 containerd[1536]: time="2025-08-13T00:01:59.078578779Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:59.082318 containerd[1536]: time="2025-08-13T00:01:59.082262862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:59.083797 containerd[1536]: time="2025-08-13T00:01:59.083660142Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.981414879s" Aug 13 00:01:59.083797 containerd[1536]: time="2025-08-13T00:01:59.083699901Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:02:03.526261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:02:03.535738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:02:03.547302 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:02:03.547430 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:02:03.547795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:02:03.550583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:02:03.575764 systemd[1]: Reloading requested from client PID 2144 ('systemctl') (unit session-7.scope)... Aug 13 00:02:03.575784 systemd[1]: Reloading... Aug 13 00:02:03.651473 zram_generator::config[2181]: No configuration found. Aug 13 00:02:03.913802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:02:03.970663 systemd[1]: Reloading finished in 394 ms. Aug 13 00:02:04.014914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:02:04.017784 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:02:04.018045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:02:04.020653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:02:04.126560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:02:04.131267 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:02:04.181519 kubelet[2243]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:02:04.181519 kubelet[2243]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:02:04.181519 kubelet[2243]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:02:04.181878 kubelet[2243]: I0813 00:02:04.181519 2243 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:02:05.174811 kubelet[2243]: I0813 00:02:05.174758 2243 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:02:05.174811 kubelet[2243]: I0813 00:02:05.174793 2243 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:02:05.175101 kubelet[2243]: I0813 00:02:05.175071 2243 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:02:05.207313 kubelet[2243]: E0813 00:02:05.207251 2243 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:05.208462 kubelet[2243]: I0813 
00:02:05.208255 2243 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:02:05.215410 kubelet[2243]: E0813 00:02:05.215375 2243 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:02:05.215410 kubelet[2243]: I0813 00:02:05.215413 2243 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:02:05.219396 kubelet[2243]: I0813 00:02:05.219367 2243 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:02:05.223712 kubelet[2243]: I0813 00:02:05.223669 2243 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:02:05.228615 kubelet[2243]: I0813 00:02:05.228560 2243 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:02:05.228797 kubelet[2243]: I0813 00:02:05.228607 2243 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:02:05.228946 kubelet[2243]: I0813 00:02:05.228922 2243 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:02:05.228946 kubelet[2243]: I0813 00:02:05.228934 2243 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:02:05.231075 kubelet[2243]: I0813 00:02:05.231044 2243 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:05.234382 kubelet[2243]: I0813 00:02:05.234349 2243 kubelet.go:408] "Attempting 
to sync node with API server" Aug 13 00:02:05.234480 kubelet[2243]: I0813 00:02:05.234388 2243 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:02:05.234480 kubelet[2243]: I0813 00:02:05.234411 2243 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:02:05.234532 kubelet[2243]: I0813 00:02:05.234510 2243 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:02:05.241461 kubelet[2243]: I0813 00:02:05.240719 2243 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:02:05.242652 kubelet[2243]: I0813 00:02:05.241696 2243 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:02:05.242652 kubelet[2243]: W0813 00:02:05.242022 2243 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:02:05.242652 kubelet[2243]: W0813 00:02:05.242547 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Aug 13 00:02:05.243486 kubelet[2243]: E0813 00:02:05.242607 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:05.243979 kubelet[2243]: W0813 00:02:05.243929 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Aug 13 
00:02:05.244041 kubelet[2243]: E0813 00:02:05.243998 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:02:05.244527 kubelet[2243]: I0813 00:02:05.244502 2243 server.go:1274] "Started kubelet"
Aug 13 00:02:05.247417 kubelet[2243]: I0813 00:02:05.247302 2243 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:02:05.248811 kubelet[2243]: I0813 00:02:05.248779 2243 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 00:02:05.248997 kubelet[2243]: E0813 00:02:05.248944 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:02:05.249099 kubelet[2243]: I0813 00:02:05.249081 2243 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 00:02:05.249337 kubelet[2243]: I0813 00:02:05.249156 2243 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:02:05.251909 kubelet[2243]: I0813 00:02:05.251733 2243 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:02:05.252940 kubelet[2243]: I0813 00:02:05.252875 2243 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:02:05.254126 kubelet[2243]: I0813 00:02:05.254081 2243 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 00:02:05.255153 kubelet[2243]: I0813 00:02:05.255056 2243 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:02:05.255466 kubelet[2243]: I0813 00:02:05.255315 2243 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:02:05.255571 kubelet[2243]: E0813 00:02:05.255529 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms"
Aug 13 00:02:05.255675 kubelet[2243]: W0813 00:02:05.255624 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Aug 13 00:02:05.255717 kubelet[2243]: E0813 00:02:05.255676 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:02:05.259907 kubelet[2243]: E0813 00:02:05.256998 2243 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a9b0e9e14fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:02:05.244478714 +0000 UTC m=+1.109708145,LastTimestamp:2025-08-13 00:02:05.244478714 +0000 UTC m=+1.109708145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 00:02:05.260618 kubelet[2243]: I0813 00:02:05.260539 2243 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:02:05.260618 kubelet[2243]: I0813 00:02:05.260557 2243 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:02:05.260734 kubelet[2243]: I0813 00:02:05.260666 2243 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:02:05.271867 kubelet[2243]: I0813 00:02:05.271516 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:02:05.273058 kubelet[2243]: I0813 00:02:05.273028 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:02:05.273218 kubelet[2243]: I0813 00:02:05.273151 2243 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 00:02:05.273218 kubelet[2243]: I0813 00:02:05.273178 2243 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 00:02:05.273348 kubelet[2243]: E0813 00:02:05.273321 2243 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:02:05.274051 kubelet[2243]: W0813 00:02:05.274011 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Aug 13 00:02:05.274269 kubelet[2243]: E0813 00:02:05.274223 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:02:05.274485 kubelet[2243]: E0813 00:02:05.273113 2243 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:02:05.278931 kubelet[2243]: I0813 00:02:05.278699 2243 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 00:02:05.278931 kubelet[2243]: I0813 00:02:05.278910 2243 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 00:02:05.278931 kubelet[2243]: I0813 00:02:05.278931 2243 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:02:05.350138 kubelet[2243]: E0813 00:02:05.350073 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:02:05.355328 kubelet[2243]: I0813 00:02:05.355297 2243 policy_none.go:49] "None policy: Start"
Aug 13 00:02:05.357979 kubelet[2243]: I0813 00:02:05.357908 2243 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 00:02:05.357979 kubelet[2243]: I0813 00:02:05.357938 2243 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:02:05.364517 kubelet[2243]: I0813 00:02:05.363925 2243 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:02:05.364517 kubelet[2243]: I0813 00:02:05.364121 2243 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:02:05.364517 kubelet[2243]: I0813 00:02:05.364132 2243 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:02:05.364517 kubelet[2243]: I0813 00:02:05.364383 2243 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:02:05.365540 kubelet[2243]: E0813 00:02:05.365519 2243 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 13 00:02:05.451263 kubelet[2243]: I0813 00:02:05.450481 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90187012e4db3b137be8082cf7641b05-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90187012e4db3b137be8082cf7641b05\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:02:05.451263 kubelet[2243]: I0813 00:02:05.450529 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:02:05.451263 kubelet[2243]: I0813 00:02:05.450549 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:02:05.451263 kubelet[2243]: I0813 00:02:05.450565 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:02:05.451263 kubelet[2243]: I0813 00:02:05.450582 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90187012e4db3b137be8082cf7641b05-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90187012e4db3b137be8082cf7641b05\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:02:05.451494 kubelet[2243]: I0813 00:02:05.450596 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90187012e4db3b137be8082cf7641b05-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90187012e4db3b137be8082cf7641b05\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:02:05.451494 kubelet[2243]: I0813 00:02:05.450612 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:02:05.451494 kubelet[2243]: I0813 00:02:05.450627 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:02:05.451494 kubelet[2243]: I0813 00:02:05.450641 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 00:02:05.456920 kubelet[2243]: E0813 00:02:05.456864 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms"
Aug 13 00:02:05.466462 kubelet[2243]: I0813 00:02:05.466293 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:02:05.466871 kubelet[2243]: E0813 00:02:05.466824 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Aug 13 00:02:05.667945 kubelet[2243]: I0813 00:02:05.667900 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:02:05.668324 kubelet[2243]: E0813 00:02:05.668292 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Aug 13 00:02:05.680735 kubelet[2243]: E0813 00:02:05.680642 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:05.681389 containerd[1536]: time="2025-08-13T00:02:05.681218570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90187012e4db3b137be8082cf7641b05,Namespace:kube-system,Attempt:0,}"
Aug 13 00:02:05.682399 kubelet[2243]: E0813 00:02:05.682379 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:05.682399 kubelet[2243]: E0813 00:02:05.682394 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:05.682805 containerd[1536]: time="2025-08-13T00:02:05.682775330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}"
Aug 13 00:02:05.683127 containerd[1536]: time="2025-08-13T00:02:05.683023296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}"
Aug 13 00:02:05.857862 kubelet[2243]: E0813 00:02:05.857740 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms"
Aug 13 00:02:06.069960 kubelet[2243]: I0813 00:02:06.069919 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:02:06.070347 kubelet[2243]: E0813 00:02:06.070303 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Aug 13 00:02:06.439099 kubelet[2243]: W0813 00:02:06.439023 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Aug 13 00:02:06.439099 kubelet[2243]: E0813 00:02:06.439099 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:02:06.526348 kubelet[2243]: W0813 00:02:06.526252 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Aug 13 00:02:06.526348 kubelet[2243]: E0813 00:02:06.526329 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:02:06.558498 kubelet[2243]: W0813 00:02:06.558401 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Aug 13 00:02:06.558498 kubelet[2243]: E0813 00:02:06.558503 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:02:06.575822 kubelet[2243]: W0813 00:02:06.575736 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Aug 13 00:02:06.575822 kubelet[2243]: E0813 00:02:06.575820 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:02:06.615658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20113980.mount: Deactivated successfully.
Aug 13 00:02:06.629472 containerd[1536]: time="2025-08-13T00:02:06.629406156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:02:06.634217 containerd[1536]: time="2025-08-13T00:02:06.634168070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 00:02:06.635211 containerd[1536]: time="2025-08-13T00:02:06.635148393Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:02:06.637110 containerd[1536]: time="2025-08-13T00:02:06.637018517Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:02:06.638021 containerd[1536]: time="2025-08-13T00:02:06.637982407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:02:06.638274 containerd[1536]: time="2025-08-13T00:02:06.638214073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 00:02:06.640013 containerd[1536]: time="2025-08-13T00:02:06.639935377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Aug 13 00:02:06.641951 containerd[1536]: time="2025-08-13T00:02:06.641879111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:02:06.643031 containerd[1536]: time="2025-08-13T00:02:06.642965231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 959.955648ms"
Aug 13 00:02:06.649984 containerd[1536]: time="2025-08-13T00:02:06.649936012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 966.846826ms"
Aug 13 00:02:06.650911 containerd[1536]: time="2025-08-13T00:02:06.650786028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 969.459947ms"
Aug 13 00:02:06.659322 kubelet[2243]: E0813 00:02:06.658968 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s"
Aug 13 00:02:06.801767 containerd[1536]: time="2025-08-13T00:02:06.801003347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:02:06.801767 containerd[1536]: time="2025-08-13T00:02:06.801110663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:02:06.801767 containerd[1536]: time="2025-08-13T00:02:06.801156885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:06.801767 containerd[1536]: time="2025-08-13T00:02:06.801307624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:06.803448 containerd[1536]: time="2025-08-13T00:02:06.803314332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:02:06.803509 containerd[1536]: time="2025-08-13T00:02:06.803479145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:02:06.803557 containerd[1536]: time="2025-08-13T00:02:06.803523008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:06.803679 containerd[1536]: time="2025-08-13T00:02:06.803650196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:06.807726 containerd[1536]: time="2025-08-13T00:02:06.806742985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:02:06.807726 containerd[1536]: time="2025-08-13T00:02:06.806830550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:02:06.807726 containerd[1536]: time="2025-08-13T00:02:06.806853061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:06.807726 containerd[1536]: time="2025-08-13T00:02:06.806952541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:06.862464 containerd[1536]: time="2025-08-13T00:02:06.861981882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"30c434d70fa4bbe00b969c0fec5c2ce25dfffca7ccaf0c9c23afb03066afb411\""
Aug 13 00:02:06.863151 kubelet[2243]: E0813 00:02:06.863119 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:06.868425 containerd[1536]: time="2025-08-13T00:02:06.865306857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"8caba35321a6d45814e9edc9de7ccb3f83d3bc1d197e4b422e60bf3bac375512\""
Aug 13 00:02:06.868425 containerd[1536]: time="2025-08-13T00:02:06.866149396Z" level=info msg="CreateContainer within sandbox \"30c434d70fa4bbe00b969c0fec5c2ce25dfffca7ccaf0c9c23afb03066afb411\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 00:02:06.868582 kubelet[2243]: E0813 00:02:06.866911 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:06.869772 containerd[1536]: time="2025-08-13T00:02:06.869706597Z" level=info msg="CreateContainer within sandbox \"8caba35321a6d45814e9edc9de7ccb3f83d3bc1d197e4b422e60bf3bac375512\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 00:02:06.872450 kubelet[2243]: I0813 00:02:06.872369 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:02:06.872956 kubelet[2243]: E0813 00:02:06.872822 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Aug 13 00:02:06.875353 containerd[1536]: time="2025-08-13T00:02:06.875294217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90187012e4db3b137be8082cf7641b05,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0fcb14f5bb168e8d70c619288ca7ee3f03189366a6f115294ebaa05d5da4b70\""
Aug 13 00:02:06.876484 kubelet[2243]: E0813 00:02:06.876412 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:06.878378 containerd[1536]: time="2025-08-13T00:02:06.878340665Z" level=info msg="CreateContainer within sandbox \"c0fcb14f5bb168e8d70c619288ca7ee3f03189366a6f115294ebaa05d5da4b70\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 00:02:06.904347 containerd[1536]: time="2025-08-13T00:02:06.904298125Z" level=info msg="CreateContainer within sandbox \"30c434d70fa4bbe00b969c0fec5c2ce25dfffca7ccaf0c9c23afb03066afb411\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2ce9d5e1ca99d1070b4beb95b274e42caca62378615dc705240745a5eea79631\""
Aug 13 00:02:06.905343 containerd[1536]: time="2025-08-13T00:02:06.905210556Z" level=info msg="StartContainer for \"2ce9d5e1ca99d1070b4beb95b274e42caca62378615dc705240745a5eea79631\""
Aug 13 00:02:06.907202 containerd[1536]: time="2025-08-13T00:02:06.906614828Z" level=info msg="CreateContainer within sandbox \"8caba35321a6d45814e9edc9de7ccb3f83d3bc1d197e4b422e60bf3bac375512\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cda369314615ff6ef0dccafb29fb57e7facd7f37d31aec4f64fab36be34012a8\""
Aug 13 00:02:06.907202 containerd[1536]: time="2025-08-13T00:02:06.907097873Z" level=info msg="StartContainer for \"cda369314615ff6ef0dccafb29fb57e7facd7f37d31aec4f64fab36be34012a8\""
Aug 13 00:02:06.910047 containerd[1536]: time="2025-08-13T00:02:06.909992942Z" level=info msg="CreateContainer within sandbox \"c0fcb14f5bb168e8d70c619288ca7ee3f03189366a6f115294ebaa05d5da4b70\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc464221eb3255bcf504552600d79e63e7b0ba711122e7647b70c977d9f953fc\""
Aug 13 00:02:06.912326 containerd[1536]: time="2025-08-13T00:02:06.911164988Z" level=info msg="StartContainer for \"dc464221eb3255bcf504552600d79e63e7b0ba711122e7647b70c977d9f953fc\""
Aug 13 00:02:07.028403 containerd[1536]: time="2025-08-13T00:02:07.027729529Z" level=info msg="StartContainer for \"dc464221eb3255bcf504552600d79e63e7b0ba711122e7647b70c977d9f953fc\" returns successfully"
Aug 13 00:02:07.028403 containerd[1536]: time="2025-08-13T00:02:07.027885633Z" level=info msg="StartContainer for \"2ce9d5e1ca99d1070b4beb95b274e42caca62378615dc705240745a5eea79631\" returns successfully"
Aug 13 00:02:07.028403 containerd[1536]: time="2025-08-13T00:02:07.027917502Z" level=info msg="StartContainer for \"cda369314615ff6ef0dccafb29fb57e7facd7f37d31aec4f64fab36be34012a8\" returns successfully"
Aug 13 00:02:07.280121 kubelet[2243]: E0813 00:02:07.280092 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:07.282192 kubelet[2243]: E0813 00:02:07.282159 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:07.284643 kubelet[2243]: E0813 00:02:07.284590 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:08.288477 kubelet[2243]: E0813 00:02:08.287899 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:08.475301 kubelet[2243]: I0813 00:02:08.474978 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:02:08.741539 kubelet[2243]: E0813 00:02:08.741497 2243 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 13 00:02:08.851402 kubelet[2243]: I0813 00:02:08.850330 2243 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Aug 13 00:02:08.851402 kubelet[2243]: E0813 00:02:08.850449 2243 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Aug 13 00:02:08.868037 kubelet[2243]: E0813 00:02:08.867964 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:02:09.240471 kubelet[2243]: I0813 00:02:09.240261 2243 apiserver.go:52] "Watching apiserver"
Aug 13 00:02:09.249731 kubelet[2243]: I0813 00:02:09.249619 2243 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 00:02:09.302771 kubelet[2243]: E0813 00:02:09.302733 2243 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Aug 13 00:02:09.303319 kubelet[2243]: E0813 00:02:09.302995 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:10.500904 kubelet[2243]: E0813 00:02:10.500856 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:11.292984 kubelet[2243]: E0813 00:02:11.292338 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:11.519967 kubelet[2243]: E0813 00:02:11.519938 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:02:11.665716 systemd[1]: Reloading requested from client PID 2531 ('systemctl') (unit session-7.scope)...
Aug 13 00:02:11.665738 systemd[1]: Reloading...
Aug 13 00:02:11.736476 zram_generator::config[2571]: No configuration found.
Aug 13 00:02:11.861737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:02:11.925998 systemd[1]: Reloading finished in 259 ms.
Aug 13 00:02:11.965063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:02:11.971942 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 00:02:11.972256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:02:11.986801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:02:12.101149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:02:12.107902 (kubelet)[2622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 00:02:12.152025 kubelet[2622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:02:12.152025 kubelet[2622]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:02:12.152025 kubelet[2622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:02:12.152416 kubelet[2622]: I0813 00:02:12.152089 2622 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:02:12.159616 kubelet[2622]: I0813 00:02:12.159564 2622 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 00:02:12.159616 kubelet[2622]: I0813 00:02:12.159599 2622 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:02:12.159923 kubelet[2622]: I0813 00:02:12.159886 2622 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 00:02:12.161421 kubelet[2622]: I0813 00:02:12.161377 2622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 00:02:12.163700 kubelet[2622]: I0813 00:02:12.163548 2622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:02:12.167981 kubelet[2622]: E0813 00:02:12.167834 2622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:02:12.167981 kubelet[2622]: I0813 00:02:12.167873 2622 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:02:12.172149 kubelet[2622]: I0813 00:02:12.172110 2622 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:02:12.172610 kubelet[2622]: I0813 00:02:12.172555 2622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 00:02:12.173046 kubelet[2622]: I0813 00:02:12.172819 2622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:02:12.175569 kubelet[2622]: I0813 00:02:12.173049 2622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Aug 13 00:02:12.175569 kubelet[2622]: I0813 00:02:12.173561 2622 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:02:12.175569 kubelet[2622]: I0813 00:02:12.173573 2622 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 00:02:12.175569 kubelet[2622]: I0813 00:02:12.173618 2622 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:02:12.175569 kubelet[2622]: I0813 00:02:12.173720 2622 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 00:02:12.177239 kubelet[2622]: I0813 00:02:12.173733 2622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:02:12.177239 kubelet[2622]: I0813 00:02:12.173752 2622 kubelet.go:314] "Adding apiserver pod source"
Aug 13 00:02:12.177239 kubelet[2622]: I0813 00:02:12.173765 2622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:02:12.178710 kubelet[2622]: I0813 00:02:12.177324 2622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 00:02:12.178710 kubelet[2622]: I0813 00:02:12.177908 2622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.178414 2622 server.go:1274] "Started kubelet"
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.179761 2622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.180039 2622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.180143 2622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.180246 2622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.180257 2622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.180715 2622 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.180815 2622 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 00:02:12.182607 kubelet[2622]: I0813
00:02:12.180930 2622 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:02:12.182607 kubelet[2622]: I0813 00:02:12.181175 2622 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:02:12.182607 kubelet[2622]: E0813 00:02:12.181238 2622 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:02:12.196316 kubelet[2622]: I0813 00:02:12.196231 2622 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:02:12.196655 kubelet[2622]: I0813 00:02:12.196367 2622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:02:12.202484 kubelet[2622]: I0813 00:02:12.202272 2622 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:02:12.203648 kubelet[2622]: E0813 00:02:12.203617 2622 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:02:12.218876 kubelet[2622]: I0813 00:02:12.218817 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:02:12.220049 kubelet[2622]: I0813 00:02:12.220014 2622 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:02:12.220049 kubelet[2622]: I0813 00:02:12.220050 2622 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:02:12.220140 kubelet[2622]: I0813 00:02:12.220072 2622 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:02:12.220167 kubelet[2622]: E0813 00:02:12.220135 2622 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:02:12.268557 kubelet[2622]: I0813 00:02:12.268389 2622 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:02:12.268557 kubelet[2622]: I0813 00:02:12.268410 2622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:02:12.268557 kubelet[2622]: I0813 00:02:12.268462 2622 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:12.268732 kubelet[2622]: I0813 00:02:12.268643 2622 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:02:12.268732 kubelet[2622]: I0813 00:02:12.268654 2622 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:02:12.268732 kubelet[2622]: I0813 00:02:12.268676 2622 policy_none.go:49] "None policy: Start" Aug 13 00:02:12.269707 kubelet[2622]: I0813 00:02:12.269461 2622 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:02:12.269707 kubelet[2622]: I0813 00:02:12.269485 2622 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:02:12.269707 kubelet[2622]: I0813 00:02:12.269665 2622 state_mem.go:75] "Updated machine memory state" Aug 13 00:02:12.270781 kubelet[2622]: I0813 00:02:12.270741 2622 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:02:12.270946 kubelet[2622]: I0813 00:02:12.270920 2622 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:02:12.270977 kubelet[2622]: I0813 00:02:12.270938 2622 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:02:12.272516 kubelet[2622]: I0813 00:02:12.272491 2622 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:02:12.336378 kubelet[2622]: E0813 00:02:12.336278 2622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:02:12.338613 kubelet[2622]: E0813 00:02:12.338498 2622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:02:12.375313 kubelet[2622]: I0813 00:02:12.375261 2622 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:02:12.389815 kubelet[2622]: I0813 00:02:12.389681 2622 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 00:02:12.389815 kubelet[2622]: I0813 00:02:12.389781 2622 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:02:12.482082 kubelet[2622]: I0813 00:02:12.481741 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90187012e4db3b137be8082cf7641b05-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90187012e4db3b137be8082cf7641b05\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:02:12.482082 kubelet[2622]: I0813 00:02:12.481784 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90187012e4db3b137be8082cf7641b05-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90187012e4db3b137be8082cf7641b05\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:02:12.482082 kubelet[2622]: I0813 00:02:12.481820 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:02:12.482082 kubelet[2622]: I0813 00:02:12.481839 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:02:12.482082 kubelet[2622]: I0813 00:02:12.481855 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:02:12.482316 kubelet[2622]: I0813 00:02:12.481872 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90187012e4db3b137be8082cf7641b05-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90187012e4db3b137be8082cf7641b05\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:02:12.482316 kubelet[2622]: I0813 00:02:12.481891 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:02:12.482316 kubelet[2622]: I0813 00:02:12.481907 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:02:12.482316 kubelet[2622]: I0813 00:02:12.481924 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:02:12.631015 kubelet[2622]: E0813 00:02:12.630931 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:12.637155 kubelet[2622]: E0813 00:02:12.636818 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:12.638397 sudo[2657]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:02:12.638720 sudo[2657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:02:12.639010 kubelet[2622]: E0813 00:02:12.638954 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:13.085878 sudo[2657]: pam_unix(sudo:session): session closed for user root Aug 13 00:02:13.174549 kubelet[2622]: I0813 00:02:13.174501 2622 apiserver.go:52] "Watching apiserver" Aug 13 00:02:13.181921 kubelet[2622]: I0813 00:02:13.181863 2622 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:02:13.246218 
kubelet[2622]: E0813 00:02:13.246166 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:13.246348 kubelet[2622]: E0813 00:02:13.246279 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:13.260777 kubelet[2622]: E0813 00:02:13.260486 2622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:02:13.260777 kubelet[2622]: E0813 00:02:13.260694 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:13.265725 kubelet[2622]: I0813 00:02:13.265257 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.265242614 podStartE2EDuration="3.265242614s" podCreationTimestamp="2025-08-13 00:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:13.264988534 +0000 UTC m=+1.152414547" watchObservedRunningTime="2025-08-13 00:02:13.265242614 +0000 UTC m=+1.152668667" Aug 13 00:02:13.289957 kubelet[2622]: I0813 00:02:13.289885 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.289864384 podStartE2EDuration="1.289864384s" podCreationTimestamp="2025-08-13 00:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:13.278034023 +0000 UTC m=+1.165460116" watchObservedRunningTime="2025-08-13 00:02:13.289864384 
+0000 UTC m=+1.177290437" Aug 13 00:02:13.290622 kubelet[2622]: I0813 00:02:13.290485 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.290470968 podStartE2EDuration="2.290470968s" podCreationTimestamp="2025-08-13 00:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:13.290244364 +0000 UTC m=+1.177670417" watchObservedRunningTime="2025-08-13 00:02:13.290470968 +0000 UTC m=+1.177897021" Aug 13 00:02:14.247297 kubelet[2622]: E0813 00:02:14.247246 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:14.247806 kubelet[2622]: E0813 00:02:14.247772 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:14.832350 kubelet[2622]: E0813 00:02:14.832296 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:15.190280 sudo[1739]: pam_unix(sudo:session): session closed for user root Aug 13 00:02:15.192019 sshd[1733]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:15.195752 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:60890.service: Deactivated successfully. Aug 13 00:02:15.203593 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:02:15.203609 systemd-logind[1511]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:02:15.206117 systemd-logind[1511]: Removed session 7. 
Aug 13 00:02:15.248966 kubelet[2622]: E0813 00:02:15.248870 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:17.062266 kubelet[2622]: E0813 00:02:17.062223 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:17.254543 kubelet[2622]: E0813 00:02:17.253138 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:18.988931 kubelet[2622]: I0813 00:02:18.988881 2622 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:02:18.989572 containerd[1536]: time="2025-08-13T00:02:18.989461614Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:02:18.989926 kubelet[2622]: I0813 00:02:18.989903 2622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:02:20.037112 kubelet[2622]: I0813 00:02:20.037017 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f5c9c53-80c8-4350-9012-8dad22ddefe1-lib-modules\") pod \"kube-proxy-qd55d\" (UID: \"6f5c9c53-80c8-4350-9012-8dad22ddefe1\") " pod="kube-system/kube-proxy-qd55d" Aug 13 00:02:20.037112 kubelet[2622]: I0813 00:02:20.037078 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cni-path\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.037112 kubelet[2622]: I0813 00:02:20.037128 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f5c9c53-80c8-4350-9012-8dad22ddefe1-kube-proxy\") pod \"kube-proxy-qd55d\" (UID: \"6f5c9c53-80c8-4350-9012-8dad22ddefe1\") " pod="kube-system/kube-proxy-qd55d" Aug 13 00:02:20.040640 kubelet[2622]: I0813 00:02:20.037151 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f5c9c53-80c8-4350-9012-8dad22ddefe1-xtables-lock\") pod \"kube-proxy-qd55d\" (UID: \"6f5c9c53-80c8-4350-9012-8dad22ddefe1\") " pod="kube-system/kube-proxy-qd55d" Aug 13 00:02:20.040640 kubelet[2622]: I0813 00:02:20.037170 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-hostproc\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " 
pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040640 kubelet[2622]: I0813 00:02:20.037233 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-cgroup\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040640 kubelet[2622]: I0813 00:02:20.037254 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-etc-cni-netd\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040640 kubelet[2622]: I0813 00:02:20.037308 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-xtables-lock\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040640 kubelet[2622]: I0813 00:02:20.037327 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-bpf-maps\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040917 kubelet[2622]: I0813 00:02:20.037373 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c385021-e474-493a-b72e-5249c52d7ce5-clustermesh-secrets\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040917 kubelet[2622]: I0813 00:02:20.037393 2622 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-config-path\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040917 kubelet[2622]: I0813 00:02:20.037453 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c5f\" (UniqueName: \"kubernetes.io/projected/6f5c9c53-80c8-4350-9012-8dad22ddefe1-kube-api-access-89c5f\") pod \"kube-proxy-qd55d\" (UID: \"6f5c9c53-80c8-4350-9012-8dad22ddefe1\") " pod="kube-system/kube-proxy-qd55d" Aug 13 00:02:20.040917 kubelet[2622]: I0813 00:02:20.037479 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx6gj\" (UniqueName: \"kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-kube-api-access-tx6gj\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.040917 kubelet[2622]: I0813 00:02:20.037603 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-run\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.041039 kubelet[2622]: I0813 00:02:20.037623 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-lib-modules\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.041039 kubelet[2622]: I0813 00:02:20.037641 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-net\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.041039 kubelet[2622]: I0813 00:02:20.037669 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-kernel\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.041039 kubelet[2622]: I0813 00:02:20.037685 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-hubble-tls\") pod \"cilium-xr82c\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") " pod="kube-system/cilium-xr82c" Aug 13 00:02:20.137984 kubelet[2622]: I0813 00:02:20.137922 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-cilium-config-path\") pod \"cilium-operator-5d85765b45-bk9n9\" (UID: \"9ab45edf-35b4-4275-b91d-b51c4f1e4e50\") " pod="kube-system/cilium-operator-5d85765b45-bk9n9" Aug 13 00:02:20.138615 kubelet[2622]: I0813 00:02:20.138152 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zkq2\" (UniqueName: \"kubernetes.io/projected/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-kube-api-access-5zkq2\") pod \"cilium-operator-5d85765b45-bk9n9\" (UID: \"9ab45edf-35b4-4275-b91d-b51c4f1e4e50\") " pod="kube-system/cilium-operator-5d85765b45-bk9n9" Aug 13 00:02:20.213421 kubelet[2622]: E0813 00:02:20.213383 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:20.214516 containerd[1536]: time="2025-08-13T00:02:20.214376259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qd55d,Uid:6f5c9c53-80c8-4350-9012-8dad22ddefe1,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:20.226301 kubelet[2622]: E0813 00:02:20.223705 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:20.226493 containerd[1536]: time="2025-08-13T00:02:20.224269828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xr82c,Uid:6c385021-e474-493a-b72e-5249c52d7ce5,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:20.363065 kubelet[2622]: E0813 00:02:20.362341 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:20.364150 containerd[1536]: time="2025-08-13T00:02:20.363881601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bk9n9,Uid:9ab45edf-35b4-4275-b91d-b51c4f1e4e50,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:20.434777 containerd[1536]: time="2025-08-13T00:02:20.434651581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:20.434777 containerd[1536]: time="2025-08-13T00:02:20.434770458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:20.434996 containerd[1536]: time="2025-08-13T00:02:20.434789017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:20.434996 containerd[1536]: time="2025-08-13T00:02:20.434902454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:20.449467 containerd[1536]: time="2025-08-13T00:02:20.449260741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:20.449467 containerd[1536]: time="2025-08-13T00:02:20.449406857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:20.449467 containerd[1536]: time="2025-08-13T00:02:20.449424456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:20.452177 containerd[1536]: time="2025-08-13T00:02:20.449670649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:20.464045 containerd[1536]: time="2025-08-13T00:02:20.461304731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:20.464045 containerd[1536]: time="2025-08-13T00:02:20.463982217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:20.464045 containerd[1536]: time="2025-08-13T00:02:20.464021176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:20.465754 containerd[1536]: time="2025-08-13T00:02:20.464515403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:20.507211 containerd[1536]: time="2025-08-13T00:02:20.507057836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qd55d,Uid:6f5c9c53-80c8-4350-9012-8dad22ddefe1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d4c2bb68692355ec8adc8bc1692c453ae7c515333f8f84ccbe6ca38bb4bc042\"" Aug 13 00:02:20.512564 kubelet[2622]: E0813 00:02:20.512488 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:20.520391 containerd[1536]: time="2025-08-13T00:02:20.519883005Z" level=info msg="CreateContainer within sandbox \"2d4c2bb68692355ec8adc8bc1692c453ae7c515333f8f84ccbe6ca38bb4bc042\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:02:20.522899 containerd[1536]: time="2025-08-13T00:02:20.522837324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xr82c,Uid:6c385021-e474-493a-b72e-5249c52d7ce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\"" Aug 13 00:02:20.523667 kubelet[2622]: E0813 00:02:20.523529 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:20.526043 containerd[1536]: time="2025-08-13T00:02:20.525992197Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:02:20.539430 containerd[1536]: time="2025-08-13T00:02:20.539372951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bk9n9,Uid:9ab45edf-35b4-4275-b91d-b51c4f1e4e50,Namespace:kube-system,Attempt:0,} returns sandbox id \"13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7\"" Aug 13 00:02:20.540819 
kubelet[2622]: E0813 00:02:20.540446 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:20.580625 containerd[1536]: time="2025-08-13T00:02:20.580535902Z" level=info msg="CreateContainer within sandbox \"2d4c2bb68692355ec8adc8bc1692c453ae7c515333f8f84ccbe6ca38bb4bc042\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc15be0ac61673f32e00cc917afed7d7918b418f1fb5a3950093e453eec6f5f0\"" Aug 13 00:02:20.581480 containerd[1536]: time="2025-08-13T00:02:20.581238003Z" level=info msg="StartContainer for \"cc15be0ac61673f32e00cc917afed7d7918b418f1fb5a3950093e453eec6f5f0\"" Aug 13 00:02:20.637416 containerd[1536]: time="2025-08-13T00:02:20.637301906Z" level=info msg="StartContainer for \"cc15be0ac61673f32e00cc917afed7d7918b418f1fb5a3950093e453eec6f5f0\" returns successfully" Aug 13 00:02:21.268279 kubelet[2622]: E0813 00:02:21.267484 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:24.149405 kubelet[2622]: E0813 00:02:24.149357 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:24.168403 kubelet[2622]: I0813 00:02:24.168336 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qd55d" podStartSLOduration=5.167596623 podStartE2EDuration="5.167596623s" podCreationTimestamp="2025-08-13 00:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:21.283484523 +0000 UTC m=+9.170910576" watchObservedRunningTime="2025-08-13 00:02:24.167596623 +0000 UTC m=+12.055022676" Aug 13 00:02:24.277548 
kubelet[2622]: E0813 00:02:24.277149 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:24.777261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754037780.mount: Deactivated successfully. Aug 13 00:02:24.960342 kubelet[2622]: E0813 00:02:24.960304 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:26.397145 containerd[1536]: time="2025-08-13T00:02:26.397081751Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:02:26.398260 containerd[1536]: time="2025-08-13T00:02:26.397718859Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Aug 13 00:02:26.399384 containerd[1536]: time="2025-08-13T00:02:26.399183389Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:02:26.408612 containerd[1536]: time="2025-08-13T00:02:26.408533243Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.882484807s" Aug 13 00:02:26.408612 containerd[1536]: time="2025-08-13T00:02:26.408605761Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 13 00:02:26.412014 containerd[1536]: time="2025-08-13T00:02:26.411540903Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:02:26.416555 containerd[1536]: time="2025-08-13T00:02:26.416481484Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:02:26.441765 containerd[1536]: time="2025-08-13T00:02:26.441645381Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\"" Aug 13 00:02:26.442296 containerd[1536]: time="2025-08-13T00:02:26.442222970Z" level=info msg="StartContainer for \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\"" Aug 13 00:02:26.494983 containerd[1536]: time="2025-08-13T00:02:26.494937597Z" level=info msg="StartContainer for \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\" returns successfully" Aug 13 00:02:26.719248 containerd[1536]: time="2025-08-13T00:02:26.716782285Z" level=info msg="shim disconnected" id=8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3 namespace=k8s.io Aug 13 00:02:26.719528 containerd[1536]: time="2025-08-13T00:02:26.719254315Z" level=warning msg="cleaning up after shim disconnected" id=8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3 namespace=k8s.io Aug 13 00:02:26.719528 containerd[1536]: time="2025-08-13T00:02:26.719271155Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:02:27.068598 
update_engine[1516]: I20250813 00:02:27.067641 1516 update_attempter.cc:509] Updating boot flags... Aug 13 00:02:27.122891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3085) Aug 13 00:02:27.163522 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3089) Aug 13 00:02:27.293928 kubelet[2622]: E0813 00:02:27.293737 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:27.296104 containerd[1536]: time="2025-08-13T00:02:27.296063442Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:02:27.420759 containerd[1536]: time="2025-08-13T00:02:27.420612396Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\"" Aug 13 00:02:27.421901 containerd[1536]: time="2025-08-13T00:02:27.421815893Z" level=info msg="StartContainer for \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\"" Aug 13 00:02:27.434617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3-rootfs.mount: Deactivated successfully. Aug 13 00:02:27.485490 containerd[1536]: time="2025-08-13T00:02:27.485409285Z" level=info msg="StartContainer for \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\" returns successfully" Aug 13 00:02:27.584832 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:02:27.585371 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 00:02:27.585541 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:02:27.597826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:02:27.614211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2-rootfs.mount: Deactivated successfully. Aug 13 00:02:27.619700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:02:27.629136 containerd[1536]: time="2025-08-13T00:02:27.629029717Z" level=info msg="shim disconnected" id=f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2 namespace=k8s.io Aug 13 00:02:27.629136 containerd[1536]: time="2025-08-13T00:02:27.629088796Z" level=warning msg="cleaning up after shim disconnected" id=f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2 namespace=k8s.io Aug 13 00:02:27.629136 containerd[1536]: time="2025-08-13T00:02:27.629098836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:02:27.899993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158765011.mount: Deactivated successfully. 
Aug 13 00:02:28.297104 kubelet[2622]: E0813 00:02:28.297067 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:28.300576 containerd[1536]: time="2025-08-13T00:02:28.300338201Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:02:28.344951 containerd[1536]: time="2025-08-13T00:02:28.344899995Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\"" Aug 13 00:02:28.346619 containerd[1536]: time="2025-08-13T00:02:28.346585205Z" level=info msg="StartContainer for \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\"" Aug 13 00:02:28.362172 containerd[1536]: time="2025-08-13T00:02:28.362111164Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:02:28.363055 containerd[1536]: time="2025-08-13T00:02:28.362897670Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Aug 13 00:02:28.365416 containerd[1536]: time="2025-08-13T00:02:28.365372745Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:02:28.367706 containerd[1536]: time="2025-08-13T00:02:28.367032515Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.955433973s" Aug 13 00:02:28.367706 containerd[1536]: time="2025-08-13T00:02:28.367074794Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 13 00:02:28.370209 containerd[1536]: time="2025-08-13T00:02:28.370153219Z" level=info msg="CreateContainer within sandbox \"13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:02:28.388457 containerd[1536]: time="2025-08-13T00:02:28.388058095Z" level=info msg="CreateContainer within sandbox \"13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\"" Aug 13 00:02:28.388935 containerd[1536]: time="2025-08-13T00:02:28.388898000Z" level=info msg="StartContainer for \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\"" Aug 13 00:02:28.438479 containerd[1536]: time="2025-08-13T00:02:28.436829494Z" level=info msg="StartContainer for \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\" returns successfully" Aug 13 00:02:28.467465 containerd[1536]: time="2025-08-13T00:02:28.467402381Z" level=info msg="StartContainer for \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\" returns successfully" Aug 13 00:02:28.476849 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277-rootfs.mount: Deactivated successfully. Aug 13 00:02:28.554976 containerd[1536]: time="2025-08-13T00:02:28.554746842Z" level=info msg="shim disconnected" id=11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277 namespace=k8s.io Aug 13 00:02:28.554976 containerd[1536]: time="2025-08-13T00:02:28.554894880Z" level=warning msg="cleaning up after shim disconnected" id=11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277 namespace=k8s.io Aug 13 00:02:28.554976 containerd[1536]: time="2025-08-13T00:02:28.554904759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:02:29.316380 kubelet[2622]: E0813 00:02:29.316332 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:29.325027 kubelet[2622]: E0813 00:02:29.324922 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:29.335958 kubelet[2622]: I0813 00:02:29.335871 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bk9n9" podStartSLOduration=1.508763621 podStartE2EDuration="9.335854094s" podCreationTimestamp="2025-08-13 00:02:20 +0000 UTC" firstStartedPulling="2025-08-13 00:02:20.541421975 +0000 UTC m=+8.428848028" lastFinishedPulling="2025-08-13 00:02:28.368512448 +0000 UTC m=+16.255938501" observedRunningTime="2025-08-13 00:02:29.335768655 +0000 UTC m=+17.223194708" watchObservedRunningTime="2025-08-13 00:02:29.335854094 +0000 UTC m=+17.223280147" Aug 13 00:02:29.338820 containerd[1536]: time="2025-08-13T00:02:29.338743364Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" 
for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:02:29.364708 containerd[1536]: time="2025-08-13T00:02:29.364649198Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\"" Aug 13 00:02:29.365645 containerd[1536]: time="2025-08-13T00:02:29.365431985Z" level=info msg="StartContainer for \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\"" Aug 13 00:02:29.433233 containerd[1536]: time="2025-08-13T00:02:29.433181179Z" level=info msg="StartContainer for \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\" returns successfully" Aug 13 00:02:29.451470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3-rootfs.mount: Deactivated successfully. Aug 13 00:02:29.455244 containerd[1536]: time="2025-08-13T00:02:29.455176760Z" level=info msg="shim disconnected" id=9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3 namespace=k8s.io Aug 13 00:02:29.455720 containerd[1536]: time="2025-08-13T00:02:29.455252359Z" level=warning msg="cleaning up after shim disconnected" id=9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3 namespace=k8s.io Aug 13 00:02:29.455720 containerd[1536]: time="2025-08-13T00:02:29.455265038Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:02:30.329814 kubelet[2622]: E0813 00:02:30.329624 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:30.329814 kubelet[2622]: E0813 00:02:30.329707 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:30.342152 containerd[1536]: time="2025-08-13T00:02:30.342086251Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:02:30.393305 containerd[1536]: time="2025-08-13T00:02:30.393243492Z" level=info msg="CreateContainer within sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\"" Aug 13 00:02:30.394892 containerd[1536]: time="2025-08-13T00:02:30.394852146Z" level=info msg="StartContainer for \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\"" Aug 13 00:02:30.465970 containerd[1536]: time="2025-08-13T00:02:30.465907220Z" level=info msg="StartContainer for \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\" returns successfully" Aug 13 00:02:30.646550 kubelet[2622]: I0813 00:02:30.646429 2622 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:02:30.823026 kubelet[2622]: I0813 00:02:30.822960 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjqt\" (UniqueName: \"kubernetes.io/projected/798d8db6-8626-41fa-b7b8-a2a68f9da19d-kube-api-access-phjqt\") pod \"coredns-7c65d6cfc9-l6kg6\" (UID: \"798d8db6-8626-41fa-b7b8-a2a68f9da19d\") " pod="kube-system/coredns-7c65d6cfc9-l6kg6" Aug 13 00:02:30.823026 kubelet[2622]: I0813 00:02:30.823035 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7b8481-1ac0-48ee-b34a-7cd2671c97a4-config-volume\") pod \"coredns-7c65d6cfc9-vjf4t\" (UID: \"aa7b8481-1ac0-48ee-b34a-7cd2671c97a4\") " pod="kube-system/coredns-7c65d6cfc9-vjf4t" Aug 13 00:02:30.823200 
kubelet[2622]: I0813 00:02:30.823057 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9sf\" (UniqueName: \"kubernetes.io/projected/aa7b8481-1ac0-48ee-b34a-7cd2671c97a4-kube-api-access-pk9sf\") pod \"coredns-7c65d6cfc9-vjf4t\" (UID: \"aa7b8481-1ac0-48ee-b34a-7cd2671c97a4\") " pod="kube-system/coredns-7c65d6cfc9-vjf4t" Aug 13 00:02:30.823200 kubelet[2622]: I0813 00:02:30.823079 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/798d8db6-8626-41fa-b7b8-a2a68f9da19d-config-volume\") pod \"coredns-7c65d6cfc9-l6kg6\" (UID: \"798d8db6-8626-41fa-b7b8-a2a68f9da19d\") " pod="kube-system/coredns-7c65d6cfc9-l6kg6" Aug 13 00:02:30.998541 kubelet[2622]: E0813 00:02:30.998074 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:30.999680 containerd[1536]: time="2025-08-13T00:02:30.999621226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l6kg6,Uid:798d8db6-8626-41fa-b7b8-a2a68f9da19d,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:31.029556 kubelet[2622]: E0813 00:02:31.028856 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:31.030450 containerd[1536]: time="2025-08-13T00:02:31.030376545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vjf4t,Uid:aa7b8481-1ac0-48ee-b34a-7cd2671c97a4,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:31.336816 kubelet[2622]: E0813 00:02:31.335933 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:32.337883 
kubelet[2622]: E0813 00:02:32.337842 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:32.747552 systemd-networkd[1229]: cilium_host: Link UP Aug 13 00:02:32.747684 systemd-networkd[1229]: cilium_net: Link UP Aug 13 00:02:32.747979 systemd-networkd[1229]: cilium_net: Gained carrier Aug 13 00:02:32.748097 systemd-networkd[1229]: cilium_host: Gained carrier Aug 13 00:02:32.748186 systemd-networkd[1229]: cilium_net: Gained IPv6LL Aug 13 00:02:32.748782 systemd-networkd[1229]: cilium_host: Gained IPv6LL Aug 13 00:02:32.849283 systemd-networkd[1229]: cilium_vxlan: Link UP Aug 13 00:02:32.849297 systemd-networkd[1229]: cilium_vxlan: Gained carrier Aug 13 00:02:33.239483 kernel: NET: Registered PF_ALG protocol family Aug 13 00:02:33.341838 kubelet[2622]: E0813 00:02:33.340785 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:33.918287 systemd-networkd[1229]: lxc_health: Link UP Aug 13 00:02:33.928044 systemd-networkd[1229]: lxc_health: Gained carrier Aug 13 00:02:34.132264 systemd-networkd[1229]: lxc87144a6c2139: Link UP Aug 13 00:02:34.143056 kernel: eth0: renamed from tmp1ea34 Aug 13 00:02:34.167521 systemd-networkd[1229]: cilium_vxlan: Gained IPv6LL Aug 13 00:02:34.168407 systemd-networkd[1229]: lxc87144a6c2139: Gained carrier Aug 13 00:02:34.169135 systemd-networkd[1229]: lxcfdd78180b379: Link UP Aug 13 00:02:34.174343 kernel: eth0: renamed from tmp6c3da Aug 13 00:02:34.180251 systemd-networkd[1229]: lxcfdd78180b379: Gained carrier Aug 13 00:02:34.252475 kubelet[2622]: I0813 00:02:34.250393 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xr82c" podStartSLOduration=9.363871727 podStartE2EDuration="15.250377603s" podCreationTimestamp="2025-08-13 
00:02:19 +0000 UTC" firstStartedPulling="2025-08-13 00:02:20.524694673 +0000 UTC m=+8.412120726" lastFinishedPulling="2025-08-13 00:02:26.411200589 +0000 UTC m=+14.298626602" observedRunningTime="2025-08-13 00:02:31.369776715 +0000 UTC m=+19.257202768" watchObservedRunningTime="2025-08-13 00:02:34.250377603 +0000 UTC m=+22.137803616" Aug 13 00:02:34.342750 kubelet[2622]: E0813 00:02:34.342702 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:35.300634 systemd-networkd[1229]: lxcfdd78180b379: Gained IPv6LL Aug 13 00:02:35.492663 systemd-networkd[1229]: lxc_health: Gained IPv6LL Aug 13 00:02:35.876676 systemd-networkd[1229]: lxc87144a6c2139: Gained IPv6LL Aug 13 00:02:38.099602 containerd[1536]: time="2025-08-13T00:02:38.099475327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:38.099602 containerd[1536]: time="2025-08-13T00:02:38.099564845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:38.099602 containerd[1536]: time="2025-08-13T00:02:38.099580325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:38.100217 containerd[1536]: time="2025-08-13T00:02:38.099717244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:38.128621 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:02:38.131630 containerd[1536]: time="2025-08-13T00:02:38.129348143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:38.131805 containerd[1536]: time="2025-08-13T00:02:38.131739075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:38.131805 containerd[1536]: time="2025-08-13T00:02:38.131762155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:38.132018 containerd[1536]: time="2025-08-13T00:02:38.131928793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:38.157781 containerd[1536]: time="2025-08-13T00:02:38.157735896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l6kg6,Uid:798d8db6-8626-41fa-b7b8-a2a68f9da19d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c3da83a7c6002aa6aa301686e1e72197042e85bc39114520fdcd3836c15f979\"" Aug 13 00:02:38.158515 kubelet[2622]: E0813 00:02:38.158487 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:38.160520 containerd[1536]: time="2025-08-13T00:02:38.160486985Z" level=info msg="CreateContainer within sandbox \"6c3da83a7c6002aa6aa301686e1e72197042e85bc39114520fdcd3836c15f979\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:02:38.164704 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:02:38.185040 containerd[1536]: time="2025-08-13T00:02:38.184999662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vjf4t,Uid:aa7b8481-1ac0-48ee-b34a-7cd2671c97a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ea344da3b484f7b6a23dda9dab85665d37392d4f272e6b20bb7ab54c0871a89\"" Aug 13 00:02:38.185822 
kubelet[2622]: E0813 00:02:38.185786 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:38.187345 containerd[1536]: time="2025-08-13T00:02:38.187310876Z" level=info msg="CreateContainer within sandbox \"1ea344da3b484f7b6a23dda9dab85665d37392d4f272e6b20bb7ab54c0871a89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:02:38.207672 containerd[1536]: time="2025-08-13T00:02:38.207614922Z" level=info msg="CreateContainer within sandbox \"6c3da83a7c6002aa6aa301686e1e72197042e85bc39114520fdcd3836c15f979\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38b7ff0f5d39aeaea36f6780fc954b46287df17a08cb79b3b8df48c5865be59e\"" Aug 13 00:02:38.208150 containerd[1536]: time="2025-08-13T00:02:38.208118836Z" level=info msg="StartContainer for \"38b7ff0f5d39aeaea36f6780fc954b46287df17a08cb79b3b8df48c5865be59e\"" Aug 13 00:02:38.212450 containerd[1536]: time="2025-08-13T00:02:38.212392787Z" level=info msg="CreateContainer within sandbox \"1ea344da3b484f7b6a23dda9dab85665d37392d4f272e6b20bb7ab54c0871a89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19bc77c7df26460956b1930627545438689a654015d738eae88df9c3c43bd890\"" Aug 13 00:02:38.213028 containerd[1536]: time="2025-08-13T00:02:38.212899061Z" level=info msg="StartContainer for \"19bc77c7df26460956b1930627545438689a654015d738eae88df9c3c43bd890\"" Aug 13 00:02:38.269503 containerd[1536]: time="2025-08-13T00:02:38.269369012Z" level=info msg="StartContainer for \"38b7ff0f5d39aeaea36f6780fc954b46287df17a08cb79b3b8df48c5865be59e\" returns successfully" Aug 13 00:02:38.279509 containerd[1536]: time="2025-08-13T00:02:38.278748664Z" level=info msg="StartContainer for \"19bc77c7df26460956b1930627545438689a654015d738eae88df9c3c43bd890\" returns successfully" Aug 13 00:02:38.363102 kubelet[2622]: E0813 00:02:38.355706 2622 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:38.373346 kubelet[2622]: E0813 00:02:38.363991 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:38.382831 kubelet[2622]: I0813 00:02:38.381306 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vjf4t" podStartSLOduration=18.381290004 podStartE2EDuration="18.381290004s" podCreationTimestamp="2025-08-13 00:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:38.381052407 +0000 UTC m=+26.268478460" watchObservedRunningTime="2025-08-13 00:02:38.381290004 +0000 UTC m=+26.268716057" Aug 13 00:02:38.407156 kubelet[2622]: I0813 00:02:38.402032 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-l6kg6" podStartSLOduration=18.401833648 podStartE2EDuration="18.401833648s" podCreationTimestamp="2025-08-13 00:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:38.40159733 +0000 UTC m=+26.289023383" watchObservedRunningTime="2025-08-13 00:02:38.401833648 +0000 UTC m=+26.289259701" Aug 13 00:02:39.417113 kubelet[2622]: E0813 00:02:39.415115 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:39.417113 kubelet[2622]: E0813 00:02:39.415420 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 
13 00:02:40.417072 kubelet[2622]: E0813 00:02:40.417028 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:40.420581 kubelet[2622]: E0813 00:02:40.417781 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:42.897093 kubelet[2622]: I0813 00:02:42.897025 2622 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:02:42.897535 kubelet[2622]: E0813 00:02:42.897457 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:43.447712 kubelet[2622]: E0813 00:02:43.447628 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:02:43.516766 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:49768.service - OpenSSH per-connection server daemon (10.0.0.1:49768). Aug 13 00:02:43.554588 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 49768 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:02:43.556499 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:43.561449 systemd-logind[1511]: New session 8 of user core. Aug 13 00:02:43.569823 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:02:43.729187 sshd[4032]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:43.734361 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:49768.service: Deactivated successfully. Aug 13 00:02:43.736644 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:02:43.736822 systemd-logind[1511]: Session 8 logged out. 
Waiting for processes to exit. Aug 13 00:02:43.738735 systemd-logind[1511]: Removed session 8. Aug 13 00:02:48.741768 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:49794.service - OpenSSH per-connection server daemon (10.0.0.1:49794). Aug 13 00:02:48.780850 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 49794 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:02:48.783866 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:48.789036 systemd-logind[1511]: New session 9 of user core. Aug 13 00:02:48.797800 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:02:48.952581 sshd[4049]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:48.956186 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:49794.service: Deactivated successfully. Aug 13 00:02:48.961177 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:02:48.963329 systemd-logind[1511]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:02:48.964648 systemd-logind[1511]: Removed session 9. Aug 13 00:02:53.968976 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:57588.service - OpenSSH per-connection server daemon (10.0.0.1:57588). Aug 13 00:02:54.004168 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 57588 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:02:54.005661 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:54.012869 systemd-logind[1511]: New session 10 of user core. Aug 13 00:02:54.023817 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:02:54.162664 sshd[4068]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:54.181623 systemd-logind[1511]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:02:54.184603 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:57588.service: Deactivated successfully. 
Aug 13 00:02:54.189066 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:02:54.190875 systemd-logind[1511]: Removed session 10. Aug 13 00:02:59.176790 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:57626.service - OpenSSH per-connection server daemon (10.0.0.1:57626). Aug 13 00:02:59.215005 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 57626 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:02:59.216928 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:59.224331 systemd-logind[1511]: New session 11 of user core. Aug 13 00:02:59.230855 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:02:59.389652 sshd[4084]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:59.409814 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632). Aug 13 00:02:59.410269 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:57626.service: Deactivated successfully. Aug 13 00:02:59.414409 systemd-logind[1511]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:02:59.416496 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:02:59.418924 systemd-logind[1511]: Removed session 11. Aug 13 00:02:59.450821 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:02:59.452623 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:59.458015 systemd-logind[1511]: New session 12 of user core. Aug 13 00:02:59.466833 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:02:59.665085 sshd[4097]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:59.680363 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:57648.service - OpenSSH per-connection server daemon (10.0.0.1:57648). 
Aug 13 00:02:59.681994 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:57632.service: Deactivated successfully.
Aug 13 00:02:59.687651 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:02:59.702803 systemd-logind[1511]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:02:59.708083 systemd-logind[1511]: Removed session 12.
Aug 13 00:02:59.746692 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 57648 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:02:59.748089 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:02:59.755691 systemd-logind[1511]: New session 13 of user core.
Aug 13 00:02:59.763843 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:02:59.904795 sshd[4112]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:59.908072 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:57648.service: Deactivated successfully.
Aug 13 00:02:59.911368 systemd-logind[1511]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:02:59.911829 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:02:59.913090 systemd-logind[1511]: Removed session 13.
Aug 13 00:03:04.926869 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:60106.service - OpenSSH per-connection server daemon (10.0.0.1:60106).
Aug 13 00:03:04.959693 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 60106 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:04.961565 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:04.985913 systemd-logind[1511]: New session 14 of user core.
Aug 13 00:03:04.994800 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:03:05.114171 sshd[4129]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:05.117759 systemd-logind[1511]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:03:05.118773 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:60106.service: Deactivated successfully.
Aug 13 00:03:05.121099 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:03:05.122099 systemd-logind[1511]: Removed session 14.
Aug 13 00:03:10.128008 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:60174.service - OpenSSH per-connection server daemon (10.0.0.1:60174).
Aug 13 00:03:10.164676 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 60174 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:10.165180 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:10.170662 systemd-logind[1511]: New session 15 of user core.
Aug 13 00:03:10.177797 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:03:10.303859 sshd[4146]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:10.306772 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:60174.service: Deactivated successfully.
Aug 13 00:03:10.312774 systemd-logind[1511]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:03:10.325983 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:60182.service - OpenSSH per-connection server daemon (10.0.0.1:60182).
Aug 13 00:03:10.326299 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:03:10.327873 systemd-logind[1511]: Removed session 15.
Aug 13 00:03:10.357512 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 60182 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:10.358995 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:10.362875 systemd-logind[1511]: New session 16 of user core.
Aug 13 00:03:10.374802 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:03:10.626794 sshd[4161]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:10.631750 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:60194.service - OpenSSH per-connection server daemon (10.0.0.1:60194).
Aug 13 00:03:10.632189 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:60182.service: Deactivated successfully.
Aug 13 00:03:10.636531 systemd-logind[1511]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:03:10.637279 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:03:10.640637 systemd-logind[1511]: Removed session 16.
Aug 13 00:03:10.675825 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 60194 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:10.677285 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:10.681601 systemd-logind[1511]: New session 17 of user core.
Aug 13 00:03:10.690777 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:03:12.000878 sshd[4171]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:12.018530 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:60198.service - OpenSSH per-connection server daemon (10.0.0.1:60198).
Aug 13 00:03:12.018960 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:60194.service: Deactivated successfully.
Aug 13 00:03:12.028999 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:03:12.029236 systemd-logind[1511]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:03:12.032650 systemd-logind[1511]: Removed session 17.
Aug 13 00:03:12.073668 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 60198 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:12.077171 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:12.090666 systemd-logind[1511]: New session 18 of user core.
Aug 13 00:03:12.100818 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:03:12.357379 sshd[4197]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:12.365852 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:60206.service - OpenSSH per-connection server daemon (10.0.0.1:60206).
Aug 13 00:03:12.367118 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:60198.service: Deactivated successfully.
Aug 13 00:03:12.370414 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:03:12.371532 systemd-logind[1511]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:03:12.373606 systemd-logind[1511]: Removed session 18.
Aug 13 00:03:12.401089 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 60206 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:12.402542 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:12.407712 systemd-logind[1511]: New session 19 of user core.
Aug 13 00:03:12.422838 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:03:12.568002 sshd[4214]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:12.573802 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:60206.service: Deactivated successfully.
Aug 13 00:03:12.577174 systemd-logind[1511]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:03:12.577420 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:03:12.579344 systemd-logind[1511]: Removed session 19.
Aug 13 00:03:17.577697 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:57536.service - OpenSSH per-connection server daemon (10.0.0.1:57536).
Aug 13 00:03:17.613419 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 57536 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:17.614747 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:17.618499 systemd-logind[1511]: New session 20 of user core.
Aug 13 00:03:17.628722 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:03:17.739175 sshd[4235]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:17.743081 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:57536.service: Deactivated successfully.
Aug 13 00:03:17.745626 systemd-logind[1511]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:03:17.745721 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:03:17.746828 systemd-logind[1511]: Removed session 20.
Aug 13 00:03:22.754820 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266).
Aug 13 00:03:22.791506 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:22.792744 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:22.801220 systemd-logind[1511]: New session 21 of user core.
Aug 13 00:03:22.814948 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:03:22.933948 sshd[4252]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:22.944412 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:36266.service: Deactivated successfully.
Aug 13 00:03:22.947727 systemd-logind[1511]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:03:22.948036 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:03:22.949408 systemd-logind[1511]: Removed session 21.
Aug 13 00:03:27.952746 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:36282.service - OpenSSH per-connection server daemon (10.0.0.1:36282).
Aug 13 00:03:27.986422 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 36282 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:27.987813 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:27.991970 systemd-logind[1511]: New session 22 of user core.
Aug 13 00:03:28.000853 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:03:28.128372 sshd[4267]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:28.132527 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:36282.service: Deactivated successfully.
Aug 13 00:03:28.134940 systemd-logind[1511]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:03:28.135863 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:03:28.136908 systemd-logind[1511]: Removed session 22.
Aug 13 00:03:33.146674 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:55598.service - OpenSSH per-connection server daemon (10.0.0.1:55598).
Aug 13 00:03:33.184576 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 55598 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:33.186377 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:33.191015 systemd-logind[1511]: New session 23 of user core.
Aug 13 00:03:33.196771 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:03:33.341343 sshd[4282]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:33.357482 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:55606.service - OpenSSH per-connection server daemon (10.0.0.1:55606).
Aug 13 00:03:33.358774 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:55598.service: Deactivated successfully.
Aug 13 00:03:33.363756 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:03:33.367407 systemd-logind[1511]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:03:33.369465 systemd-logind[1511]: Removed session 23.
Aug 13 00:03:33.388088 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 55606 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:03:33.389839 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:03:33.397029 systemd-logind[1511]: New session 24 of user core.
Aug 13 00:03:33.413770 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:03:35.023692 containerd[1536]: time="2025-08-13T00:03:35.023627928Z" level=info msg="StopContainer for \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\" with timeout 30 (s)"
Aug 13 00:03:35.025577 containerd[1536]: time="2025-08-13T00:03:35.025546022Z" level=info msg="Stop container \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\" with signal terminated"
Aug 13 00:03:35.059565 containerd[1536]: time="2025-08-13T00:03:35.058635227Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:03:35.059397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a-rootfs.mount: Deactivated successfully.
Aug 13 00:03:35.060332 containerd[1536]: time="2025-08-13T00:03:35.060169599Z" level=info msg="StopContainer for \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\" with timeout 2 (s)"
Aug 13 00:03:35.060804 containerd[1536]: time="2025-08-13T00:03:35.060532441Z" level=info msg="Stop container \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\" with signal terminated"
Aug 13 00:03:35.063811 containerd[1536]: time="2025-08-13T00:03:35.063764305Z" level=info msg="shim disconnected" id=9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a namespace=k8s.io
Aug 13 00:03:35.063811 containerd[1536]: time="2025-08-13T00:03:35.063808986Z" level=warning msg="cleaning up after shim disconnected" id=9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a namespace=k8s.io
Aug 13 00:03:35.063923 containerd[1536]: time="2025-08-13T00:03:35.063818586Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:35.069009 systemd-networkd[1229]: lxc_health: Link DOWN
Aug 13 00:03:35.069014 systemd-networkd[1229]: lxc_health: Lost carrier
Aug 13 00:03:35.113038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab-rootfs.mount: Deactivated successfully.
Aug 13 00:03:35.117795 containerd[1536]: time="2025-08-13T00:03:35.117695784Z" level=info msg="StopContainer for \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\" returns successfully"
Aug 13 00:03:35.118935 containerd[1536]: time="2025-08-13T00:03:35.118834793Z" level=info msg="StopPodSandbox for \"13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7\""
Aug 13 00:03:35.119122 containerd[1536]: time="2025-08-13T00:03:35.118871753Z" level=info msg="Container to stop \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:03:35.119122 containerd[1536]: time="2025-08-13T00:03:35.119034754Z" level=info msg="shim disconnected" id=a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab namespace=k8s.io
Aug 13 00:03:35.119122 containerd[1536]: time="2025-08-13T00:03:35.119082155Z" level=warning msg="cleaning up after shim disconnected" id=a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab namespace=k8s.io
Aug 13 00:03:35.119122 containerd[1536]: time="2025-08-13T00:03:35.119090395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:35.120809 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7-shm.mount: Deactivated successfully.
Aug 13 00:03:35.140959 containerd[1536]: time="2025-08-13T00:03:35.140919796Z" level=info msg="StopContainer for \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\" returns successfully"
Aug 13 00:03:35.141620 containerd[1536]: time="2025-08-13T00:03:35.141574281Z" level=info msg="StopPodSandbox for \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\""
Aug 13 00:03:35.141705 containerd[1536]: time="2025-08-13T00:03:35.141650482Z" level=info msg="Container to stop \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:03:35.141705 containerd[1536]: time="2025-08-13T00:03:35.141665682Z" level=info msg="Container to stop \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:03:35.141705 containerd[1536]: time="2025-08-13T00:03:35.141690042Z" level=info msg="Container to stop \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:03:35.141705 containerd[1536]: time="2025-08-13T00:03:35.141699962Z" level=info msg="Container to stop \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:03:35.141817 containerd[1536]: time="2025-08-13T00:03:35.141709962Z" level=info msg="Container to stop \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:03:35.144130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705-shm.mount: Deactivated successfully.
Aug 13 00:03:35.162317 containerd[1536]: time="2025-08-13T00:03:35.162261994Z" level=info msg="shim disconnected" id=13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7 namespace=k8s.io
Aug 13 00:03:35.162508 containerd[1536]: time="2025-08-13T00:03:35.162334275Z" level=warning msg="cleaning up after shim disconnected" id=13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7 namespace=k8s.io
Aug 13 00:03:35.162508 containerd[1536]: time="2025-08-13T00:03:35.162347315Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:35.175165 containerd[1536]: time="2025-08-13T00:03:35.175067689Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:03:35.176393 containerd[1536]: time="2025-08-13T00:03:35.176362738Z" level=info msg="TearDown network for sandbox \"13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7\" successfully"
Aug 13 00:03:35.176393 containerd[1536]: time="2025-08-13T00:03:35.176393299Z" level=info msg="StopPodSandbox for \"13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7\" returns successfully"
Aug 13 00:03:35.177286 containerd[1536]: time="2025-08-13T00:03:35.177238745Z" level=info msg="shim disconnected" id=1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705 namespace=k8s.io
Aug 13 00:03:35.177286 containerd[1536]: time="2025-08-13T00:03:35.177284665Z" level=warning msg="cleaning up after shim disconnected" id=1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705 namespace=k8s.io
Aug 13 00:03:35.177380 containerd[1536]: time="2025-08-13T00:03:35.177292825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:35.192275 containerd[1536]: time="2025-08-13T00:03:35.192234776Z" level=info msg="TearDown network for sandbox \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" successfully"
Aug 13 00:03:35.192518 containerd[1536]: time="2025-08-13T00:03:35.192408497Z" level=info msg="StopPodSandbox for \"1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705\" returns successfully"
Aug 13 00:03:35.357072 kubelet[2622]: I0813 00:03:35.356523 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-cgroup\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357072 kubelet[2622]: I0813 00:03:35.356571 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-hubble-tls\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357072 kubelet[2622]: I0813 00:03:35.356587 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-net\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357072 kubelet[2622]: I0813 00:03:35.356632 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cni-path\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357072 kubelet[2622]: I0813 00:03:35.356647 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-etc-cni-netd\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357072 kubelet[2622]: I0813 00:03:35.356665 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c385021-e474-493a-b72e-5249c52d7ce5-clustermesh-secrets\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357578 kubelet[2622]: I0813 00:03:35.356678 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-bpf-maps\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357578 kubelet[2622]: I0813 00:03:35.356693 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-xtables-lock\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357578 kubelet[2622]: I0813 00:03:35.356707 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-lib-modules\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357578 kubelet[2622]: I0813 00:03:35.356723 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-cilium-config-path\") pod \"9ab45edf-35b4-4275-b91d-b51c4f1e4e50\" (UID: \"9ab45edf-35b4-4275-b91d-b51c4f1e4e50\") "
Aug 13 00:03:35.357578 kubelet[2622]: I0813 00:03:35.356742 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx6gj\" (UniqueName: \"kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-kube-api-access-tx6gj\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357578 kubelet[2622]: I0813 00:03:35.356794 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-kernel\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357749 kubelet[2622]: I0813 00:03:35.356811 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-run\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357749 kubelet[2622]: I0813 00:03:35.356827 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-hostproc\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357749 kubelet[2622]: I0813 00:03:35.356843 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-config-path\") pod \"6c385021-e474-493a-b72e-5249c52d7ce5\" (UID: \"6c385021-e474-493a-b72e-5249c52d7ce5\") "
Aug 13 00:03:35.357749 kubelet[2622]: I0813 00:03:35.356863 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zkq2\" (UniqueName: \"kubernetes.io/projected/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-kube-api-access-5zkq2\") pod \"9ab45edf-35b4-4275-b91d-b51c4f1e4e50\" (UID: \"9ab45edf-35b4-4275-b91d-b51c4f1e4e50\") "
Aug 13 00:03:35.361927 kubelet[2622]: I0813 00:03:35.360776 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.361927 kubelet[2622]: I0813 00:03:35.360903 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.361927 kubelet[2622]: I0813 00:03:35.361432 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.361927 kubelet[2622]: I0813 00:03:35.361498 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.361927 kubelet[2622]: I0813 00:03:35.361515 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cni-path" (OuterVolumeSpecName: "cni-path") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.362123 kubelet[2622]: I0813 00:03:35.361531 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.362123 kubelet[2622]: I0813 00:03:35.361552 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.362212 kubelet[2622]: I0813 00:03:35.362186 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.362311 kubelet[2622]: I0813 00:03:35.362297 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.362389 kubelet[2622]: I0813 00:03:35.362377 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:03:35.363591 kubelet[2622]: I0813 00:03:35.363544 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9ab45edf-35b4-4275-b91d-b51c4f1e4e50" (UID: "9ab45edf-35b4-4275-b91d-b51c4f1e4e50"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:03:35.364246 kubelet[2622]: I0813 00:03:35.364206 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:03:35.364740 kubelet[2622]: I0813 00:03:35.364708 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-kube-api-access-tx6gj" (OuterVolumeSpecName: "kube-api-access-tx6gj") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "kube-api-access-tx6gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:03:35.364787 kubelet[2622]: I0813 00:03:35.364741 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c385021-e474-493a-b72e-5249c52d7ce5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:03:35.364938 kubelet[2622]: I0813 00:03:35.364908 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-kube-api-access-5zkq2" (OuterVolumeSpecName: "kube-api-access-5zkq2") pod "9ab45edf-35b4-4275-b91d-b51c4f1e4e50" (UID: "9ab45edf-35b4-4275-b91d-b51c4f1e4e50"). InnerVolumeSpecName "kube-api-access-5zkq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:03:35.365083 kubelet[2622]: I0813 00:03:35.365065 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c385021-e474-493a-b72e-5249c52d7ce5" (UID: "6c385021-e474-493a-b72e-5249c52d7ce5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459342 2622 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459380 2622 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459393 2622 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zkq2\" (UniqueName: \"kubernetes.io/projected/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-kube-api-access-5zkq2\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459402 2622 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459412 2622 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459420 2622 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459427 2622 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.459930 kubelet[2622]: I0813 00:03:35.459472 2622 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459481 2622 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c385021-e474-493a-b72e-5249c52d7ce5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459489 2622 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459497 2622 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ab45edf-35b4-4275-b91d-b51c4f1e4e50-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459505 2622 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459512 2622 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459520 2622 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx6gj\" (UniqueName: \"kubernetes.io/projected/6c385021-e474-493a-b72e-5249c52d7ce5-kube-api-access-tx6gj\") on node \"localhost\" DevicePath \"\""
Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459527 2622 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\"
(UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:03:35.460233 kubelet[2622]: I0813 00:03:35.459535 2622 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c385021-e474-493a-b72e-5249c52d7ce5-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 00:03:35.554889 kubelet[2622]: I0813 00:03:35.554663 2622 scope.go:117] "RemoveContainer" containerID="9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a" Aug 13 00:03:35.556777 containerd[1536]: time="2025-08-13T00:03:35.556363150Z" level=info msg="RemoveContainer for \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\"" Aug 13 00:03:35.559487 containerd[1536]: time="2025-08-13T00:03:35.559452813Z" level=info msg="RemoveContainer for \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\" returns successfully" Aug 13 00:03:35.560187 kubelet[2622]: I0813 00:03:35.560159 2622 scope.go:117] "RemoveContainer" containerID="9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a" Aug 13 00:03:35.560413 containerd[1536]: time="2025-08-13T00:03:35.560346260Z" level=error msg="ContainerStatus for \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\": not found" Aug 13 00:03:35.566797 kubelet[2622]: E0813 00:03:35.566736 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\": not found" containerID="9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a" Aug 13 00:03:35.566920 kubelet[2622]: I0813 00:03:35.566814 2622 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a"} err="failed to get container status \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c0f7537b7ac1de294d31ec6a81badbdbebf78b68c7f687b4d954f65c704ee0a\": not found" Aug 13 00:03:35.566946 kubelet[2622]: I0813 00:03:35.566919 2622 scope.go:117] "RemoveContainer" containerID="a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab" Aug 13 00:03:35.568344 containerd[1536]: time="2025-08-13T00:03:35.568254438Z" level=info msg="RemoveContainer for \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\"" Aug 13 00:03:35.578258 containerd[1536]: time="2025-08-13T00:03:35.578209272Z" level=info msg="RemoveContainer for \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\" returns successfully" Aug 13 00:03:35.578526 kubelet[2622]: I0813 00:03:35.578497 2622 scope.go:117] "RemoveContainer" containerID="9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3" Aug 13 00:03:35.579887 containerd[1536]: time="2025-08-13T00:03:35.579833084Z" level=info msg="RemoveContainer for \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\"" Aug 13 00:03:35.606228 containerd[1536]: time="2025-08-13T00:03:35.606168279Z" level=info msg="RemoveContainer for \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\" returns successfully" Aug 13 00:03:35.606576 kubelet[2622]: I0813 00:03:35.606425 2622 scope.go:117] "RemoveContainer" containerID="11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277" Aug 13 00:03:35.607683 containerd[1536]: time="2025-08-13T00:03:35.607583649Z" level=info msg="RemoveContainer for \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\"" Aug 13 00:03:35.632636 containerd[1536]: time="2025-08-13T00:03:35.632588114Z" level=info msg="RemoveContainer for 
\"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\" returns successfully" Aug 13 00:03:35.633482 kubelet[2622]: I0813 00:03:35.633399 2622 scope.go:117] "RemoveContainer" containerID="f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2" Aug 13 00:03:35.634428 containerd[1536]: time="2025-08-13T00:03:35.634403888Z" level=info msg="RemoveContainer for \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\"" Aug 13 00:03:35.653273 containerd[1536]: time="2025-08-13T00:03:35.653193747Z" level=info msg="RemoveContainer for \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\" returns successfully" Aug 13 00:03:35.653613 kubelet[2622]: I0813 00:03:35.653558 2622 scope.go:117] "RemoveContainer" containerID="8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3" Aug 13 00:03:35.654754 containerd[1536]: time="2025-08-13T00:03:35.654714598Z" level=info msg="RemoveContainer for \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\"" Aug 13 00:03:35.657127 containerd[1536]: time="2025-08-13T00:03:35.657075696Z" level=info msg="RemoveContainer for \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\" returns successfully" Aug 13 00:03:35.657335 kubelet[2622]: I0813 00:03:35.657311 2622 scope.go:117] "RemoveContainer" containerID="a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab" Aug 13 00:03:35.657571 containerd[1536]: time="2025-08-13T00:03:35.657527259Z" level=error msg="ContainerStatus for \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\": not found" Aug 13 00:03:35.657679 kubelet[2622]: E0813 00:03:35.657659 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\": not found" containerID="a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab" Aug 13 00:03:35.657737 kubelet[2622]: I0813 00:03:35.657703 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab"} err="failed to get container status \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2c75103901c6c2d00e11b3f2c401c4f5bb8ded92afb379ba7560f8101d499ab\": not found" Aug 13 00:03:35.657737 kubelet[2622]: I0813 00:03:35.657724 2622 scope.go:117] "RemoveContainer" containerID="9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3" Aug 13 00:03:35.657890 containerd[1536]: time="2025-08-13T00:03:35.657857781Z" level=error msg="ContainerStatus for \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\": not found" Aug 13 00:03:35.657987 kubelet[2622]: E0813 00:03:35.657971 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\": not found" containerID="9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3" Aug 13 00:03:35.658031 kubelet[2622]: I0813 00:03:35.657991 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3"} err="failed to get container status \"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"9e3e08ce7517861216938b5a399c0371e24aaeb36b7610a75648dc286a7d6fa3\": not found" Aug 13 00:03:35.658031 kubelet[2622]: I0813 00:03:35.658002 2622 scope.go:117] "RemoveContainer" containerID="11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277" Aug 13 00:03:35.658161 containerd[1536]: time="2025-08-13T00:03:35.658134463Z" level=error msg="ContainerStatus for \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\": not found" Aug 13 00:03:35.658249 kubelet[2622]: E0813 00:03:35.658233 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\": not found" containerID="11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277" Aug 13 00:03:35.658295 kubelet[2622]: I0813 00:03:35.658252 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277"} err="failed to get container status \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\": rpc error: code = NotFound desc = an error occurred when try to find container \"11fcc3fdf47b7d83d3d855255e846b879c6dee007256b7751b502ee628726277\": not found" Aug 13 00:03:35.658335 kubelet[2622]: I0813 00:03:35.658287 2622 scope.go:117] "RemoveContainer" containerID="f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2" Aug 13 00:03:35.658479 containerd[1536]: time="2025-08-13T00:03:35.658412946Z" level=error msg="ContainerStatus for \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\": not found" Aug 13 00:03:35.658589 kubelet[2622]: E0813 00:03:35.658562 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\": not found" containerID="f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2" Aug 13 00:03:35.658589 kubelet[2622]: I0813 00:03:35.658577 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2"} err="failed to get container status \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f38ced9feacdb319886239383c574e998bbce846eab5fa376a3391a89cd07ea2\": not found" Aug 13 00:03:35.658589 kubelet[2622]: I0813 00:03:35.658588 2622 scope.go:117] "RemoveContainer" containerID="8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3" Aug 13 00:03:35.658704 containerd[1536]: time="2025-08-13T00:03:35.658684108Z" level=error msg="ContainerStatus for \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\": not found" Aug 13 00:03:35.658864 kubelet[2622]: E0813 00:03:35.658781 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\": not found" containerID="8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3" Aug 13 00:03:35.658864 kubelet[2622]: I0813 00:03:35.658807 2622 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3"} err="failed to get container status \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8559afa87899ee3345685653017a48f8a7debb2960017abe3a8f7cdb06dd21f3\": not found" Aug 13 00:03:36.035608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13dadff532615f5d87347ee6df77e353360ac745209c358b9215fe1f0c6314b7-rootfs.mount: Deactivated successfully. Aug 13 00:03:36.035765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e45bca72b108b479ad010990a0a207d4fb6f2dc3b4c399839ed6afc0c355705-rootfs.mount: Deactivated successfully. Aug 13 00:03:36.035854 systemd[1]: var-lib-kubelet-pods-9ab45edf\x2d35b4\x2d4275\x2db91d\x2db51c4f1e4e50-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zkq2.mount: Deactivated successfully. Aug 13 00:03:36.035946 systemd[1]: var-lib-kubelet-pods-6c385021\x2de474\x2d493a\x2db72e\x2d5249c52d7ce5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtx6gj.mount: Deactivated successfully. Aug 13 00:03:36.036049 systemd[1]: var-lib-kubelet-pods-6c385021\x2de474\x2d493a\x2db72e\x2d5249c52d7ce5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:03:36.036149 systemd[1]: var-lib-kubelet-pods-6c385021\x2de474\x2d493a\x2db72e\x2d5249c52d7ce5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:03:36.220788 kubelet[2622]: E0813 00:03:36.220694 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:03:36.222618 kubelet[2622]: I0813 00:03:36.222587 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c385021-e474-493a-b72e-5249c52d7ce5" path="/var/lib/kubelet/pods/6c385021-e474-493a-b72e-5249c52d7ce5/volumes" Aug 13 00:03:36.223176 kubelet[2622]: I0813 00:03:36.223141 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ab45edf-35b4-4275-b91d-b51c4f1e4e50" path="/var/lib/kubelet/pods/9ab45edf-35b4-4275-b91d-b51c4f1e4e50/volumes" Aug 13 00:03:36.956213 sshd[4295]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:36.969777 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:55614.service - OpenSSH per-connection server daemon (10.0.0.1:55614). Aug 13 00:03:36.970196 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:55606.service: Deactivated successfully. Aug 13 00:03:36.975089 systemd-logind[1511]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:03:36.977040 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:03:36.981672 systemd-logind[1511]: Removed session 24. Aug 13 00:03:37.010306 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 55614 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:03:37.012172 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:37.017488 systemd-logind[1511]: New session 25 of user core. Aug 13 00:03:37.030757 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 13 00:03:37.300083 kubelet[2622]: E0813 00:03:37.299944 2622 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:03:38.018640 sshd[4466]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:38.025875 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:55620.service - OpenSSH per-connection server daemon (10.0.0.1:55620). Aug 13 00:03:38.028557 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:55614.service: Deactivated successfully. Aug 13 00:03:38.036758 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:03:38.039182 systemd-logind[1511]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:03:38.042118 kubelet[2622]: E0813 00:03:38.042071 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c385021-e474-493a-b72e-5249c52d7ce5" containerName="apply-sysctl-overwrites" Aug 13 00:03:38.042118 kubelet[2622]: E0813 00:03:38.042107 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c385021-e474-493a-b72e-5249c52d7ce5" containerName="mount-cgroup" Aug 13 00:03:38.042118 kubelet[2622]: E0813 00:03:38.042126 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c385021-e474-493a-b72e-5249c52d7ce5" containerName="mount-bpf-fs" Aug 13 00:03:38.042871 kubelet[2622]: E0813 00:03:38.042132 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ab45edf-35b4-4275-b91d-b51c4f1e4e50" containerName="cilium-operator" Aug 13 00:03:38.042871 kubelet[2622]: E0813 00:03:38.042138 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c385021-e474-493a-b72e-5249c52d7ce5" containerName="clean-cilium-state" Aug 13 00:03:38.042871 kubelet[2622]: E0813 00:03:38.042143 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c385021-e474-493a-b72e-5249c52d7ce5" containerName="cilium-agent" Aug 13 
00:03:38.042871 kubelet[2622]: I0813 00:03:38.042165 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c385021-e474-493a-b72e-5249c52d7ce5" containerName="cilium-agent" Aug 13 00:03:38.042871 kubelet[2622]: I0813 00:03:38.042171 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab45edf-35b4-4275-b91d-b51c4f1e4e50" containerName="cilium-operator" Aug 13 00:03:38.044344 systemd-logind[1511]: Removed session 25. Aug 13 00:03:38.079013 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 55620 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:03:38.080351 sshd[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:38.085104 systemd-logind[1511]: New session 26 of user core. Aug 13 00:03:38.091725 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:03:38.142665 sshd[4480]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:38.153597 systemd[1]: Started sshd@26-10.0.0.48:22-10.0.0.1:55636.service - OpenSSH per-connection server daemon (10.0.0.1:55636). Aug 13 00:03:38.154349 systemd[1]: sshd@25-10.0.0.48:22-10.0.0.1:55620.service: Deactivated successfully. Aug 13 00:03:38.155879 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:03:38.156604 systemd-logind[1511]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:03:38.158189 systemd-logind[1511]: Removed session 26. 
Aug 13 00:03:38.174973 kubelet[2622]: I0813 00:03:38.174937 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-hostproc\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.174973 kubelet[2622]: I0813 00:03:38.174979 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6v88\" (UniqueName: \"kubernetes.io/projected/02a0e396-fd98-48c4-bf9d-5df6f207fa54-kube-api-access-l6v88\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175339 kubelet[2622]: I0813 00:03:38.175000 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-xtables-lock\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175339 kubelet[2622]: I0813 00:03:38.175019 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02a0e396-fd98-48c4-bf9d-5df6f207fa54-clustermesh-secrets\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175339 kubelet[2622]: I0813 00:03:38.175088 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-cilium-run\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175339 kubelet[2622]: I0813 00:03:38.175141 2622 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-cilium-cgroup\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175339 kubelet[2622]: I0813 00:03:38.175163 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-host-proc-sys-net\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175339 kubelet[2622]: I0813 00:03:38.175184 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-bpf-maps\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175497 kubelet[2622]: I0813 00:03:38.175200 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02a0e396-fd98-48c4-bf9d-5df6f207fa54-hubble-tls\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175497 kubelet[2622]: I0813 00:03:38.175217 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-lib-modules\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175497 kubelet[2622]: I0813 00:03:38.175269 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/02a0e396-fd98-48c4-bf9d-5df6f207fa54-cilium-config-path\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175497 kubelet[2622]: I0813 00:03:38.175318 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-host-proc-sys-kernel\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175497 kubelet[2622]: I0813 00:03:38.175410 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-cni-path\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175497 kubelet[2622]: I0813 00:03:38.175433 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02a0e396-fd98-48c4-bf9d-5df6f207fa54-etc-cni-netd\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.175619 kubelet[2622]: I0813 00:03:38.175463 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02a0e396-fd98-48c4-bf9d-5df6f207fa54-cilium-ipsec-secrets\") pod \"cilium-rnssd\" (UID: \"02a0e396-fd98-48c4-bf9d-5df6f207fa54\") " pod="kube-system/cilium-rnssd" Aug 13 00:03:38.185908 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 55636 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:03:38.186817 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:38.191403 systemd-logind[1511]: 
New session 27 of user core. Aug 13 00:03:38.203761 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:03:38.352643 kubelet[2622]: E0813 00:03:38.352530 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:03:38.353618 containerd[1536]: time="2025-08-13T00:03:38.353030327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnssd,Uid:02a0e396-fd98-48c4-bf9d-5df6f207fa54,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:38.371723 containerd[1536]: time="2025-08-13T00:03:38.371168085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:38.371723 containerd[1536]: time="2025-08-13T00:03:38.371564528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:38.371723 containerd[1536]: time="2025-08-13T00:03:38.371582568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:38.371723 containerd[1536]: time="2025-08-13T00:03:38.371681569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:03:38.405287 containerd[1536]: time="2025-08-13T00:03:38.405251388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnssd,Uid:02a0e396-fd98-48c4-bf9d-5df6f207fa54,Namespace:kube-system,Attempt:0,} returns sandbox id \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\""
Aug 13 00:03:38.406175 kubelet[2622]: E0813 00:03:38.406150 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:38.408421 containerd[1536]: time="2025-08-13T00:03:38.408371088Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:03:38.421276 containerd[1536]: time="2025-08-13T00:03:38.421217652Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0c18835a66d2886b2017787987f021df3e3152a0e0519944ac1f7a80e2b79d2\""
Aug 13 00:03:38.421733 containerd[1536]: time="2025-08-13T00:03:38.421703175Z" level=info msg="StartContainer for \"c0c18835a66d2886b2017787987f021df3e3152a0e0519944ac1f7a80e2b79d2\""
Aug 13 00:03:38.463269 containerd[1536]: time="2025-08-13T00:03:38.463229686Z" level=info msg="StartContainer for \"c0c18835a66d2886b2017787987f021df3e3152a0e0519944ac1f7a80e2b79d2\" returns successfully"
Aug 13 00:03:38.518208 containerd[1536]: time="2025-08-13T00:03:38.518151685Z" level=info msg="shim disconnected" id=c0c18835a66d2886b2017787987f021df3e3152a0e0519944ac1f7a80e2b79d2 namespace=k8s.io
Aug 13 00:03:38.518208 containerd[1536]: time="2025-08-13T00:03:38.518206605Z" level=warning msg="cleaning up after shim disconnected" id=c0c18835a66d2886b2017787987f021df3e3152a0e0519944ac1f7a80e2b79d2 namespace=k8s.io
Aug 13 00:03:38.518208 containerd[1536]: time="2025-08-13T00:03:38.518215205Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:38.569617 kubelet[2622]: E0813 00:03:38.569583 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:38.579193 containerd[1536]: time="2025-08-13T00:03:38.579140963Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:03:38.593916 containerd[1536]: time="2025-08-13T00:03:38.593873659Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"feda6b8e8b49530b67e73787d05d313b5bac3dbd0fcb205895c584fd1f1be35e\""
Aug 13 00:03:38.597832 containerd[1536]: time="2025-08-13T00:03:38.597780365Z" level=info msg="StartContainer for \"feda6b8e8b49530b67e73787d05d313b5bac3dbd0fcb205895c584fd1f1be35e\""
Aug 13 00:03:38.649351 containerd[1536]: time="2025-08-13T00:03:38.649233180Z" level=info msg="StartContainer for \"feda6b8e8b49530b67e73787d05d313b5bac3dbd0fcb205895c584fd1f1be35e\" returns successfully"
Aug 13 00:03:38.681174 containerd[1536]: time="2025-08-13T00:03:38.681111228Z" level=info msg="shim disconnected" id=feda6b8e8b49530b67e73787d05d313b5bac3dbd0fcb205895c584fd1f1be35e namespace=k8s.io
Aug 13 00:03:38.681174 containerd[1536]: time="2025-08-13T00:03:38.681169149Z" level=warning msg="cleaning up after shim disconnected" id=feda6b8e8b49530b67e73787d05d313b5bac3dbd0fcb205895c584fd1f1be35e namespace=k8s.io
Aug 13 00:03:38.681174 containerd[1536]: time="2025-08-13T00:03:38.681179229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:39.572734 kubelet[2622]: E0813 00:03:39.572692 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:39.578693 containerd[1536]: time="2025-08-13T00:03:39.578648129Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:03:39.589807 containerd[1536]: time="2025-08-13T00:03:39.589710199Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97ad8b078dffa482028ac6d17ebd4d4ae050d0b8ba9c4013dccc312623701713\""
Aug 13 00:03:39.592924 containerd[1536]: time="2025-08-13T00:03:39.592562656Z" level=info msg="StartContainer for \"97ad8b078dffa482028ac6d17ebd4d4ae050d0b8ba9c4013dccc312623701713\""
Aug 13 00:03:39.641618 containerd[1536]: time="2025-08-13T00:03:39.641578363Z" level=info msg="StartContainer for \"97ad8b078dffa482028ac6d17ebd4d4ae050d0b8ba9c4013dccc312623701713\" returns successfully"
Aug 13 00:03:39.665896 containerd[1536]: time="2025-08-13T00:03:39.665839475Z" level=info msg="shim disconnected" id=97ad8b078dffa482028ac6d17ebd4d4ae050d0b8ba9c4013dccc312623701713 namespace=k8s.io
Aug 13 00:03:39.666337 containerd[1536]: time="2025-08-13T00:03:39.666178997Z" level=warning msg="cleaning up after shim disconnected" id=97ad8b078dffa482028ac6d17ebd4d4ae050d0b8ba9c4013dccc312623701713 namespace=k8s.io
Aug 13 00:03:39.666337 containerd[1536]: time="2025-08-13T00:03:39.666197197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:40.222555 kubelet[2622]: E0813 00:03:40.221233 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:40.284383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97ad8b078dffa482028ac6d17ebd4d4ae050d0b8ba9c4013dccc312623701713-rootfs.mount: Deactivated successfully.
Aug 13 00:03:40.583509 kubelet[2622]: E0813 00:03:40.583361 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:40.592709 containerd[1536]: time="2025-08-13T00:03:40.592660035Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:03:40.605230 containerd[1536]: time="2025-08-13T00:03:40.605179430Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c1b94b63f16ded45e76ab4a8b5012e719e5c9e4cf53327c20bf957e0df652142\""
Aug 13 00:03:40.605675 containerd[1536]: time="2025-08-13T00:03:40.605645713Z" level=info msg="StartContainer for \"c1b94b63f16ded45e76ab4a8b5012e719e5c9e4cf53327c20bf957e0df652142\""
Aug 13 00:03:40.662941 containerd[1536]: time="2025-08-13T00:03:40.662890456Z" level=info msg="StartContainer for \"c1b94b63f16ded45e76ab4a8b5012e719e5c9e4cf53327c20bf957e0df652142\" returns successfully"
Aug 13 00:03:40.676483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1b94b63f16ded45e76ab4a8b5012e719e5c9e4cf53327c20bf957e0df652142-rootfs.mount: Deactivated successfully.
Aug 13 00:03:40.679601 containerd[1536]: time="2025-08-13T00:03:40.679544236Z" level=info msg="shim disconnected" id=c1b94b63f16ded45e76ab4a8b5012e719e5c9e4cf53327c20bf957e0df652142 namespace=k8s.io
Aug 13 00:03:40.679601 containerd[1536]: time="2025-08-13T00:03:40.679599596Z" level=warning msg="cleaning up after shim disconnected" id=c1b94b63f16ded45e76ab4a8b5012e719e5c9e4cf53327c20bf957e0df652142 namespace=k8s.io
Aug 13 00:03:40.679758 containerd[1536]: time="2025-08-13T00:03:40.679610636Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:41.588155 kubelet[2622]: E0813 00:03:41.587120 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:41.594474 containerd[1536]: time="2025-08-13T00:03:41.591148865Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:03:41.613398 containerd[1536]: time="2025-08-13T00:03:41.613335952Z" level=info msg="CreateContainer within sandbox \"76032c4e33db795932f3d0446b3941331e6c87f8570f946f89aa9af290d65c8c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf1fcbf0c2dd9f487e98e39ffc2768f4befcb7352450a4987b5b19a64f11edc7\""
Aug 13 00:03:41.614801 containerd[1536]: time="2025-08-13T00:03:41.614768481Z" level=info msg="StartContainer for \"cf1fcbf0c2dd9f487e98e39ffc2768f4befcb7352450a4987b5b19a64f11edc7\""
Aug 13 00:03:41.678960 containerd[1536]: time="2025-08-13T00:03:41.678914448Z" level=info msg="StartContainer for \"cf1fcbf0c2dd9f487e98e39ffc2768f4befcb7352450a4987b5b19a64f11edc7\" returns successfully"
Aug 13 00:03:41.961718 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Aug 13 00:03:42.223106 kubelet[2622]: E0813 00:03:42.221303 2622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-vjf4t" podUID="aa7b8481-1ac0-48ee-b34a-7cd2671c97a4"
Aug 13 00:03:42.593107 kubelet[2622]: E0813 00:03:42.591866 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:43.222071 kubelet[2622]: E0813 00:03:43.221078 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:44.221216 kubelet[2622]: E0813 00:03:44.221177 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:44.353644 kubelet[2622]: E0813 00:03:44.353599 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:44.570558 systemd[1]: run-containerd-runc-k8s.io-cf1fcbf0c2dd9f487e98e39ffc2768f4befcb7352450a4987b5b19a64f11edc7-runc.CB9NzQ.mount: Deactivated successfully.
Aug 13 00:03:44.939704 systemd-networkd[1229]: lxc_health: Link UP
Aug 13 00:03:44.942213 systemd-networkd[1229]: lxc_health: Gained carrier
Aug 13 00:03:46.212601 systemd-networkd[1229]: lxc_health: Gained IPv6LL
Aug 13 00:03:46.354470 kubelet[2622]: E0813 00:03:46.354207 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:46.373357 kubelet[2622]: I0813 00:03:46.372533 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rnssd" podStartSLOduration=8.372516685 podStartE2EDuration="8.372516685s" podCreationTimestamp="2025-08-13 00:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:42.611573165 +0000 UTC m=+90.498999218" watchObservedRunningTime="2025-08-13 00:03:46.372516685 +0000 UTC m=+94.259942698"
Aug 13 00:03:46.602100 kubelet[2622]: E0813 00:03:46.601985 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:47.221751 kubelet[2622]: E0813 00:03:47.221344 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:47.221751 kubelet[2622]: E0813 00:03:47.221479 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:47.603688 kubelet[2622]: E0813 00:03:47.603544 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:51.017869 sshd[4489]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:51.021602 systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:55636.service: Deactivated successfully.
Aug 13 00:03:51.023613 systemd-logind[1511]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:03:51.023681 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:03:51.026640 systemd-logind[1511]: Removed session 27.