Aug 12 23:52:20.979600 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 12 23:52:20.979623 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025 Aug 12 23:52:20.979634 kernel: KASLR enabled Aug 12 23:52:20.979639 kernel: efi: EFI v2.7 by EDK II Aug 12 23:52:20.979645 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Aug 12 23:52:20.979650 kernel: random: crng init done Aug 12 23:52:20.979657 kernel: ACPI: Early table checksum verification disabled Aug 12 23:52:20.979663 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Aug 12 23:52:20.979669 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Aug 12 23:52:20.979677 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979687 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979694 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979700 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979707 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979715 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979724 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979730 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979739 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 12 23:52:20.979746 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Aug 12 23:52:20.979752 kernel: NUMA: Failed to 
initialise from firmware Aug 12 23:52:20.979758 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Aug 12 23:52:20.979764 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Aug 12 23:52:20.979771 kernel: Zone ranges: Aug 12 23:52:20.979781 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Aug 12 23:52:20.979788 kernel: DMA32 empty Aug 12 23:52:20.979796 kernel: Normal empty Aug 12 23:52:20.979802 kernel: Movable zone start for each node Aug 12 23:52:20.979808 kernel: Early memory node ranges Aug 12 23:52:20.979815 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Aug 12 23:52:20.979821 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Aug 12 23:52:20.979827 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Aug 12 23:52:20.979834 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Aug 12 23:52:20.979840 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Aug 12 23:52:20.979846 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Aug 12 23:52:20.979853 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Aug 12 23:52:20.979859 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Aug 12 23:52:20.979872 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Aug 12 23:52:20.979880 kernel: psci: probing for conduit method from ACPI. Aug 12 23:52:20.979886 kernel: psci: PSCIv1.1 detected in firmware. 
Aug 12 23:52:20.979893 kernel: psci: Using standard PSCI v0.2 function IDs Aug 12 23:52:20.979902 kernel: psci: Trusted OS migration not required Aug 12 23:52:20.979909 kernel: psci: SMC Calling Convention v1.1 Aug 12 23:52:20.979916 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Aug 12 23:52:20.979924 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 12 23:52:20.979931 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 12 23:52:20.979938 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Aug 12 23:52:20.979945 kernel: Detected PIPT I-cache on CPU0 Aug 12 23:52:20.979951 kernel: CPU features: detected: GIC system register CPU interface Aug 12 23:52:20.979958 kernel: CPU features: detected: Hardware dirty bit management Aug 12 23:52:20.979965 kernel: CPU features: detected: Spectre-v4 Aug 12 23:52:20.979971 kernel: CPU features: detected: Spectre-BHB Aug 12 23:52:20.979978 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 12 23:52:20.979985 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 12 23:52:20.979993 kernel: CPU features: detected: ARM erratum 1418040 Aug 12 23:52:20.980000 kernel: CPU features: detected: SSBS not fully self-synchronizing Aug 12 23:52:20.980007 kernel: alternatives: applying boot alternatives Aug 12 23:52:20.980088 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 12 23:52:20.980098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 12 23:52:20.980105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 12 23:52:20.980112 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 12 23:52:20.980119 kernel: Fallback order for Node 0: 0 Aug 12 23:52:20.980126 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Aug 12 23:52:20.980132 kernel: Policy zone: DMA Aug 12 23:52:20.980140 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 12 23:52:20.980151 kernel: software IO TLB: area num 4. Aug 12 23:52:20.980158 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Aug 12 23:52:20.980165 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) Aug 12 23:52:20.980172 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 12 23:52:20.980179 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 12 23:52:20.980187 kernel: rcu: RCU event tracing is enabled. Aug 12 23:52:20.980194 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 12 23:52:20.980205 kernel: Trampoline variant of Tasks RCU enabled. Aug 12 23:52:20.980217 kernel: Tracing variant of Tasks RCU enabled. Aug 12 23:52:20.980225 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 12 23:52:20.980232 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 12 23:52:20.980241 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 12 23:52:20.980248 kernel: GICv3: 256 SPIs implemented Aug 12 23:52:20.980256 kernel: GICv3: 0 Extended SPIs implemented Aug 12 23:52:20.980268 kernel: Root IRQ handler: gic_handle_irq Aug 12 23:52:20.980278 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 12 23:52:20.980285 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Aug 12 23:52:20.980292 kernel: ITS [mem 0x08080000-0x0809ffff] Aug 12 23:52:20.980299 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Aug 12 23:52:20.980306 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Aug 12 23:52:20.980313 kernel: GICv3: using LPI property table @0x00000000400f0000 Aug 12 23:52:20.980320 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Aug 12 23:52:20.980327 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 12 23:52:20.980336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 12 23:52:20.980343 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 12 23:52:20.980350 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 12 23:52:20.980359 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 12 23:52:20.980366 kernel: arm-pv: using stolen time PV Aug 12 23:52:20.980374 kernel: Console: colour dummy device 80x25 Aug 12 23:52:20.980384 kernel: ACPI: Core revision 20230628 Aug 12 23:52:20.980395 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Aug 12 23:52:20.980402 kernel: pid_max: default: 32768 minimum: 301 Aug 12 23:52:20.980410 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 12 23:52:20.980423 kernel: landlock: Up and running. Aug 12 23:52:20.980431 kernel: SELinux: Initializing. Aug 12 23:52:20.980438 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 12 23:52:20.980446 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 12 23:52:20.980456 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 12 23:52:20.980466 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 12 23:52:20.980473 kernel: rcu: Hierarchical SRCU implementation. Aug 12 23:52:20.980487 kernel: rcu: Max phase no-delay instances is 400. Aug 12 23:52:20.980497 kernel: Platform MSI: ITS@0x8080000 domain created Aug 12 23:52:20.980506 kernel: PCI/MSI: ITS@0x8080000 domain created Aug 12 23:52:20.980524 kernel: Remapping and enabling EFI services. Aug 12 23:52:20.980533 kernel: smp: Bringing up secondary CPUs ... 
Aug 12 23:52:20.980540 kernel: Detected PIPT I-cache on CPU1 Aug 12 23:52:20.980547 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Aug 12 23:52:20.980554 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Aug 12 23:52:20.980561 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 12 23:52:20.980568 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 12 23:52:20.980575 kernel: Detected PIPT I-cache on CPU2 Aug 12 23:52:20.980582 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Aug 12 23:52:20.980591 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Aug 12 23:52:20.980598 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 12 23:52:20.980610 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Aug 12 23:52:20.980619 kernel: Detected PIPT I-cache on CPU3 Aug 12 23:52:20.980626 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Aug 12 23:52:20.980634 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Aug 12 23:52:20.980642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 12 23:52:20.980649 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Aug 12 23:52:20.980657 kernel: smp: Brought up 1 node, 4 CPUs Aug 12 23:52:20.980667 kernel: SMP: Total of 4 processors activated. 
Aug 12 23:52:20.980674 kernel: CPU features: detected: 32-bit EL0 Support Aug 12 23:52:20.980685 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 12 23:52:20.980696 kernel: CPU features: detected: Common not Private translations Aug 12 23:52:20.980704 kernel: CPU features: detected: CRC32 instructions Aug 12 23:52:20.980712 kernel: CPU features: detected: Enhanced Virtualization Traps Aug 12 23:52:20.980720 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 12 23:52:20.980727 kernel: CPU features: detected: LSE atomic instructions Aug 12 23:52:20.980745 kernel: CPU features: detected: Privileged Access Never Aug 12 23:52:20.980752 kernel: CPU features: detected: RAS Extension Support Aug 12 23:52:20.980760 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Aug 12 23:52:20.980767 kernel: CPU: All CPU(s) started at EL1 Aug 12 23:52:20.980774 kernel: alternatives: applying system-wide alternatives Aug 12 23:52:20.980782 kernel: devtmpfs: initialized Aug 12 23:52:20.980789 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 12 23:52:20.980796 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 12 23:52:20.980808 kernel: pinctrl core: initialized pinctrl subsystem Aug 12 23:52:20.980817 kernel: SMBIOS 3.0.0 present. 
Aug 12 23:52:20.980824 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Aug 12 23:52:20.980832 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 12 23:52:20.980839 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 12 23:52:20.980846 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 12 23:52:20.980854 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 12 23:52:20.980861 kernel: audit: initializing netlink subsys (disabled) Aug 12 23:52:20.980902 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 12 23:52:20.980916 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1 Aug 12 23:52:20.980926 kernel: cpuidle: using governor menu Aug 12 23:52:20.980933 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 12 23:52:20.980941 kernel: ASID allocator initialised with 32768 entries Aug 12 23:52:20.980949 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 12 23:52:20.980957 kernel: Serial: AMBA PL011 UART driver Aug 12 23:52:20.980964 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 12 23:52:20.980972 kernel: Modules: 0 pages in range for non-PLT usage Aug 12 23:52:20.980979 kernel: Modules: 509008 pages in range for PLT usage Aug 12 23:52:20.980986 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 12 23:52:20.980996 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 12 23:52:20.981003 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 12 23:52:20.981010 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 12 23:52:20.981017 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 12 23:52:20.981025 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 12 23:52:20.981032 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages Aug 12 23:52:20.981040 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 12 23:52:20.981047 kernel: ACPI: Added _OSI(Module Device) Aug 12 23:52:20.981055 kernel: ACPI: Added _OSI(Processor Device) Aug 12 23:52:20.981063 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 12 23:52:20.981071 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 12 23:52:20.981078 kernel: ACPI: Interpreter enabled Aug 12 23:52:20.981085 kernel: ACPI: Using GIC for interrupt routing Aug 12 23:52:20.981093 kernel: ACPI: MCFG table detected, 1 entries Aug 12 23:52:20.981100 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Aug 12 23:52:20.981108 kernel: printk: console [ttyAMA0] enabled Aug 12 23:52:20.981116 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 12 23:52:20.981282 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 12 23:52:20.981367 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 12 23:52:20.981444 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 12 23:52:20.981524 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Aug 12 23:52:20.981607 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Aug 12 23:52:20.981620 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Aug 12 23:52:20.981630 kernel: PCI host bridge to bus 0000:00 Aug 12 23:52:20.981705 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Aug 12 23:52:20.981778 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 12 23:52:20.981842 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Aug 12 23:52:20.981916 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 12 23:52:20.982014 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 
0x060000 Aug 12 23:52:20.982099 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Aug 12 23:52:20.982171 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Aug 12 23:52:20.982247 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Aug 12 23:52:20.982317 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Aug 12 23:52:20.982398 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Aug 12 23:52:20.982475 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Aug 12 23:52:20.982559 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Aug 12 23:52:20.982626 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Aug 12 23:52:20.982688 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 12 23:52:20.982759 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Aug 12 23:52:20.982769 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 12 23:52:20.982777 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 12 23:52:20.982785 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 12 23:52:20.982793 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 12 23:52:20.982801 kernel: iommu: Default domain type: Translated Aug 12 23:52:20.982808 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 12 23:52:20.982816 kernel: efivars: Registered efivars operations Aug 12 23:52:20.982825 kernel: vgaarb: loaded Aug 12 23:52:20.982835 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 12 23:52:20.982846 kernel: VFS: Disk quotas dquot_6.6.0 Aug 12 23:52:20.982854 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 12 23:52:20.982862 kernel: pnp: PnP ACPI init Aug 12 23:52:20.982961 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Aug 12 23:52:20.982973 kernel: pnp: PnP ACPI: found 1 devices Aug 12 
23:52:20.982981 kernel: NET: Registered PF_INET protocol family Aug 12 23:52:20.982989 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 12 23:52:20.983000 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 12 23:52:20.983007 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 12 23:52:20.983015 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 12 23:52:20.983023 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 12 23:52:20.983031 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 12 23:52:20.983038 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 12 23:52:20.983046 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 12 23:52:20.983057 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 12 23:52:20.983067 kernel: PCI: CLS 0 bytes, default 64 Aug 12 23:52:20.983075 kernel: kvm [1]: HYP mode not available Aug 12 23:52:20.983082 kernel: Initialise system trusted keyrings Aug 12 23:52:20.983090 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 12 23:52:20.983097 kernel: Key type asymmetric registered Aug 12 23:52:20.983105 kernel: Asymmetric key parser 'x509' registered Aug 12 23:52:20.983112 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 12 23:52:20.983120 kernel: io scheduler mq-deadline registered Aug 12 23:52:20.983127 kernel: io scheduler kyber registered Aug 12 23:52:20.983137 kernel: io scheduler bfq registered Aug 12 23:52:20.983150 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 12 23:52:20.983157 kernel: ACPI: button: Power Button [PWRB] Aug 12 23:52:20.983165 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 12 23:52:20.983250 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Aug 12 
23:52:20.983262 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 12 23:52:20.983269 kernel: thunder_xcv, ver 1.0 Aug 12 23:52:20.983277 kernel: thunder_bgx, ver 1.0 Aug 12 23:52:20.983285 kernel: nicpf, ver 1.0 Aug 12 23:52:20.983292 kernel: nicvf, ver 1.0 Aug 12 23:52:20.983374 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 12 23:52:20.983455 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:52:20 UTC (1755042740) Aug 12 23:52:20.983466 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 12 23:52:20.983474 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Aug 12 23:52:20.983482 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 12 23:52:20.983489 kernel: watchdog: Hard watchdog permanently disabled Aug 12 23:52:20.983497 kernel: NET: Registered PF_INET6 protocol family Aug 12 23:52:20.983504 kernel: Segment Routing with IPv6 Aug 12 23:52:20.983528 kernel: In-situ OAM (IOAM) with IPv6 Aug 12 23:52:20.983536 kernel: NET: Registered PF_PACKET protocol family Aug 12 23:52:20.983544 kernel: Key type dns_resolver registered Aug 12 23:52:20.983552 kernel: registered taskstats version 1 Aug 12 23:52:20.983560 kernel: Loading compiled-in X.509 certificates Aug 12 23:52:20.983568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6' Aug 12 23:52:20.983575 kernel: Key type .fscrypt registered Aug 12 23:52:20.983582 kernel: Key type fscrypt-provisioning registered Aug 12 23:52:20.983590 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 12 23:52:20.983599 kernel: ima: Allocated hash algorithm: sha1 Aug 12 23:52:20.983607 kernel: ima: No architecture policies found Aug 12 23:52:20.983615 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 12 23:52:20.983622 kernel: clk: Disabling unused clocks Aug 12 23:52:20.983630 kernel: Freeing unused kernel memory: 39424K Aug 12 23:52:20.983637 kernel: Run /init as init process Aug 12 23:52:20.983645 kernel: with arguments: Aug 12 23:52:20.983653 kernel: /init Aug 12 23:52:20.983660 kernel: with environment: Aug 12 23:52:20.983669 kernel: HOME=/ Aug 12 23:52:20.983677 kernel: TERM=linux Aug 12 23:52:20.983684 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 12 23:52:20.983694 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 12 23:52:20.983703 systemd[1]: Detected virtualization kvm. Aug 12 23:52:20.983712 systemd[1]: Detected architecture arm64. Aug 12 23:52:20.983719 systemd[1]: Running in initrd. Aug 12 23:52:20.983729 systemd[1]: No hostname configured, using default hostname. Aug 12 23:52:20.983740 systemd[1]: Hostname set to . Aug 12 23:52:20.983750 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:52:20.983758 systemd[1]: Queued start job for default target initrd.target. Aug 12 23:52:20.983767 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:52:20.983775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:52:20.983785 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 12 23:52:20.983794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 12 23:52:20.983803 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 12 23:52:20.983812 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 12 23:52:20.983821 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 12 23:52:20.983830 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 12 23:52:20.983838 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:52:20.983846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:52:20.983854 systemd[1]: Reached target paths.target - Path Units. Aug 12 23:52:20.983869 systemd[1]: Reached target slices.target - Slice Units. Aug 12 23:52:20.983878 systemd[1]: Reached target swap.target - Swaps. Aug 12 23:52:20.983886 systemd[1]: Reached target timers.target - Timer Units. Aug 12 23:52:20.983894 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:52:20.983902 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:52:20.983910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 12 23:52:20.983919 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 12 23:52:20.983927 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:52:20.983935 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:52:20.983945 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:52:20.983953 systemd[1]: Reached target sockets.target - Socket Units. 
Aug 12 23:52:20.983962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 12 23:52:20.983973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 12 23:52:20.983981 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 12 23:52:20.983989 systemd[1]: Starting systemd-fsck-usr.service... Aug 12 23:52:20.983997 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:52:20.984005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:52:20.984015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:52:20.984024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 12 23:52:20.984032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:52:20.984040 systemd[1]: Finished systemd-fsck-usr.service. Aug 12 23:52:20.984068 systemd-journald[238]: Collecting audit messages is disabled. Aug 12 23:52:20.984091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 12 23:52:20.984099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 12 23:52:20.984107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:52:20.984115 kernel: Bridge firewalling registered Aug 12 23:52:20.984125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 12 23:52:20.984134 systemd-journald[238]: Journal started Aug 12 23:52:20.984154 systemd-journald[238]: Runtime Journal (/run/log/journal/e1d6c4eb54004c06a11cf440393f2aa4) is 5.9M, max 47.3M, 41.4M free. 
Aug 12 23:52:20.968628 systemd-modules-load[239]: Inserted module 'overlay' Aug 12 23:52:20.983704 systemd-modules-load[239]: Inserted module 'br_netfilter' Aug 12 23:52:20.991691 systemd[1]: Started systemd-journald.service - Journal Service. Aug 12 23:52:20.992885 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 12 23:52:20.998090 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:52:21.000432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:52:21.003422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 12 23:52:21.007228 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 12 23:52:21.018674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:52:21.020102 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:52:21.022730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:52:21.026813 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:52:21.037708 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 12 23:52:21.040378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Aug 12 23:52:21.054153 dracut-cmdline[277]: dracut-dracut-053 Aug 12 23:52:21.057435 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 12 23:52:21.080884 systemd-resolved[279]: Positive Trust Anchors: Aug 12 23:52:21.080901 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:52:21.080937 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 12 23:52:21.088590 systemd-resolved[279]: Defaulting to hostname 'linux'. Aug 12 23:52:21.089975 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 12 23:52:21.091759 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:52:21.145558 kernel: SCSI subsystem initialized Aug 12 23:52:21.152541 kernel: Loading iSCSI transport class v2.0-870. Aug 12 23:52:21.160551 kernel: iscsi: registered transport (tcp) Aug 12 23:52:21.173823 kernel: iscsi: registered transport (qla4xxx) Aug 12 23:52:21.173874 kernel: QLogic iSCSI HBA Driver Aug 12 23:52:21.225596 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Aug 12 23:52:21.237696 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 12 23:52:21.255920 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 12 23:52:21.255983 kernel: device-mapper: uevent: version 1.0.3 Aug 12 23:52:21.256003 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 12 23:52:21.307555 kernel: raid6: neonx8 gen() 15499 MB/s Aug 12 23:52:21.324539 kernel: raid6: neonx4 gen() 15392 MB/s Aug 12 23:52:21.341539 kernel: raid6: neonx2 gen() 13016 MB/s Aug 12 23:52:21.358541 kernel: raid6: neonx1 gen() 10272 MB/s Aug 12 23:52:21.375541 kernel: raid6: int64x8 gen() 6846 MB/s Aug 12 23:52:21.392539 kernel: raid6: int64x4 gen() 7192 MB/s Aug 12 23:52:21.409540 kernel: raid6: int64x2 gen() 6016 MB/s Aug 12 23:52:21.426645 kernel: raid6: int64x1 gen() 4990 MB/s Aug 12 23:52:21.426664 kernel: raid6: using algorithm neonx8 gen() 15499 MB/s Aug 12 23:52:21.444664 kernel: raid6: .... xor() 11696 MB/s, rmw enabled Aug 12 23:52:21.444691 kernel: raid6: using neon recovery algorithm Aug 12 23:52:21.450540 kernel: xor: measuring software checksum speed Aug 12 23:52:21.451795 kernel: 8regs : 16776 MB/sec Aug 12 23:52:21.451808 kernel: 32regs : 19617 MB/sec Aug 12 23:52:21.453055 kernel: arm64_neon : 25323 MB/sec Aug 12 23:52:21.453069 kernel: xor: using function: arm64_neon (25323 MB/sec) Aug 12 23:52:21.505542 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 12 23:52:21.516742 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:52:21.525718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:52:21.538101 systemd-udevd[463]: Using default interface naming scheme 'v255'. Aug 12 23:52:21.541376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Aug 12 23:52:21.547702 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:52:21.564486 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Aug 12 23:52:21.598602 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:52:21.607712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:52:21.650640 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:52:21.659793 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:52:21.678551 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:52:21.681300 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:52:21.683504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:52:21.686187 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:52:21.696735 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 12 23:52:21.706569 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 12 23:52:21.711963 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:52:21.710716 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:52:21.715875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:52:21.716003 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:52:21.719118 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:52:21.720233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:52:21.726641 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:52:21.726673 kernel: GPT:9289727 != 19775487
Aug 12 23:52:21.726683 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:52:21.726692 kernel: GPT:9289727 != 19775487
Aug 12 23:52:21.720397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:52:21.730511 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:52:21.730545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:52:21.724735 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:52:21.740095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:52:21.754956 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (509)
Aug 12 23:52:21.755021 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (514)
Aug 12 23:52:21.757369 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 12 23:52:21.759089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:52:21.765536 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 12 23:52:21.776528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:52:21.781024 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 12 23:52:21.783208 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 12 23:52:21.800736 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 12 23:52:21.802797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:52:21.823552 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:52:21.901387 disk-uuid[551]: Primary Header is updated.
Aug 12 23:52:21.901387 disk-uuid[551]: Secondary Entries is updated.
Aug 12 23:52:21.901387 disk-uuid[551]: Secondary Header is updated.
Aug 12 23:52:21.910529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:52:21.914540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:52:21.918545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:52:22.920548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:52:22.922916 disk-uuid[560]: The operation has completed successfully.
Aug 12 23:52:22.944599 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:52:22.944700 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 12 23:52:22.970705 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 12 23:52:22.975732 sh[575]: Success
Aug 12 23:52:22.995544 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 12 23:52:23.036511 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 12 23:52:23.038572 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 12 23:52:23.039604 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 12 23:52:23.052489 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 12 23:52:23.052568 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:52:23.052579 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 12 23:52:23.053731 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 12 23:52:23.054593 kernel: BTRFS info (device dm-0): using free space tree
Aug 12 23:52:23.059642 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 12 23:52:23.061114 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 12 23:52:23.061884 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 12 23:52:23.064949 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 12 23:52:23.077408 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:52:23.077468 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:52:23.077487 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:52:23.081544 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:52:23.089797 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 12 23:52:23.091576 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:52:23.099678 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 12 23:52:23.108874 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 12 23:52:23.186105 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:52:23.204745 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:52:23.248783 systemd-networkd[760]: lo: Link UP
Aug 12 23:52:23.248798 systemd-networkd[760]: lo: Gained carrier
Aug 12 23:52:23.249536 systemd-networkd[760]: Enumeration completed
Aug 12 23:52:23.249811 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:52:23.250138 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:52:23.250142 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:52:23.251646 systemd[1]: Reached target network.target - Network.
Aug 12 23:52:23.251661 systemd-networkd[760]: eth0: Link UP
Aug 12 23:52:23.251664 systemd-networkd[760]: eth0: Gained carrier
Aug 12 23:52:23.251673 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:52:23.287590 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:52:23.294924 ignition[674]: Ignition 2.19.0
Aug 12 23:52:23.294934 ignition[674]: Stage: fetch-offline
Aug 12 23:52:23.294986 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:52:23.294994 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:52:23.295240 ignition[674]: parsed url from cmdline: ""
Aug 12 23:52:23.295243 ignition[674]: no config URL provided
Aug 12 23:52:23.295248 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:52:23.295254 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:52:23.295278 ignition[674]: op(1): [started] loading QEMU firmware config module
Aug 12 23:52:23.295283 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 12 23:52:23.313560 ignition[674]: op(1): [finished] loading QEMU firmware config module
Aug 12 23:52:23.352512 ignition[674]: parsing config with SHA512: ca5baa878b10c731ac6bf1fd2df077dfcf1b7d801d7909356ce016d5152c1609b917409ff73a1d80736085f2a776191f7d42b68976e1b229612db323e42fd55f
Aug 12 23:52:23.374014 unknown[674]: fetched base config from "system"
Aug 12 23:52:23.374028 unknown[674]: fetched user config from "qemu"
Aug 12 23:52:23.374585 ignition[674]: fetch-offline: fetch-offline passed
Aug 12 23:52:23.374658 ignition[674]: Ignition finished successfully
Aug 12 23:52:23.377216 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:52:23.379582 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 12 23:52:23.391763 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 12 23:52:23.408465 ignition[771]: Ignition 2.19.0
Aug 12 23:52:23.408476 ignition[771]: Stage: kargs
Aug 12 23:52:23.408685 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:52:23.408696 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:52:23.413008 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 12 23:52:23.409704 ignition[771]: kargs: kargs passed
Aug 12 23:52:23.409762 ignition[771]: Ignition finished successfully
Aug 12 23:52:23.425763 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 12 23:52:23.438379 ignition[779]: Ignition 2.19.0
Aug 12 23:52:23.438394 ignition[779]: Stage: disks
Aug 12 23:52:23.441401 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 12 23:52:23.438599 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:52:23.442941 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 12 23:52:23.438609 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:52:23.444176 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 12 23:52:23.439578 ignition[779]: disks: disks passed
Aug 12 23:52:23.445938 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:52:23.439629 ignition[779]: Ignition finished successfully
Aug 12 23:52:23.447882 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:52:23.449091 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:52:23.463729 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:52:23.474145 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 12 23:52:23.481195 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 12 23:52:23.491704 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 12 23:52:23.546560 kernel: EXT4-fs (vda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 12 23:52:23.546562 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 12 23:52:23.547878 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:52:23.565632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:52:23.568977 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 12 23:52:23.570439 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 12 23:52:23.570573 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:52:23.578658 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (797)
Aug 12 23:52:23.578693 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:52:23.570645 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:52:23.584837 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:52:23.584870 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:52:23.578207 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 12 23:52:23.583918 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 12 23:52:23.590540 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:52:23.592342 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:52:23.652934 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:52:23.657374 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:52:23.662243 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:52:23.669346 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:52:23.759234 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 12 23:52:23.770679 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 12 23:52:23.773363 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 12 23:52:23.779543 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:52:23.800473 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 12 23:52:23.806397 ignition[910]: INFO : Ignition 2.19.0
Aug 12 23:52:23.806397 ignition[910]: INFO : Stage: mount
Aug 12 23:52:23.809137 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:52:23.809137 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:52:23.809137 ignition[910]: INFO : mount: mount passed
Aug 12 23:52:23.809137 ignition[910]: INFO : Ignition finished successfully
Aug 12 23:52:23.809957 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 12 23:52:23.818665 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 12 23:52:24.051099 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 12 23:52:24.058759 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:52:24.066547 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (923)
Aug 12 23:52:24.069287 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:52:24.069350 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:52:24.069372 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:52:24.074531 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:52:24.075490 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:52:24.102251 ignition[940]: INFO : Ignition 2.19.0
Aug 12 23:52:24.102251 ignition[940]: INFO : Stage: files
Aug 12 23:52:24.104032 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:52:24.104032 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:52:24.106526 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Aug 12 23:52:24.106526 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 12 23:52:24.106526 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 12 23:52:24.111584 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 12 23:52:24.113163 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 12 23:52:24.113163 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 12 23:52:24.112223 unknown[940]: wrote ssh authorized keys file for user: core
Aug 12 23:52:24.117280 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Aug 12 23:52:24.117280 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Aug 12 23:52:24.177113 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 12 23:52:24.323660 systemd-networkd[760]: eth0: Gained IPv6LL
Aug 12 23:52:24.975293 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Aug 12 23:52:24.975293 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 12 23:52:24.979458 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Aug 12 23:52:25.182124 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 12 23:52:25.298586 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 12 23:52:25.298586 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 12 23:52:25.303442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Aug 12 23:52:25.596174 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 12 23:52:26.052178 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 12 23:52:26.052178 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 12 23:52:26.055690 ignition[940]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:52:26.087335 ignition[940]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:52:26.091552 ignition[940]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:52:26.093432 ignition[940]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:52:26.093432 ignition[940]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 12 23:52:26.093432 ignition[940]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:52:26.093432 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:52:26.093432 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:52:26.093432 ignition[940]: INFO : files: files passed
Aug 12 23:52:26.093432 ignition[940]: INFO : Ignition finished successfully
Aug 12 23:52:26.093566 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 12 23:52:26.108713 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 12 23:52:26.112134 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 12 23:52:26.114769 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 12 23:52:26.114880 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 12 23:52:26.119969 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 12 23:52:26.123836 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:52:26.123836 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:52:26.127980 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:52:26.130559 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:52:26.132085 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 12 23:52:26.142795 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 12 23:52:26.169689 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 12 23:52:26.170799 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 12 23:52:26.172245 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 12 23:52:26.174556 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 12 23:52:26.176499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 12 23:52:26.177422 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 12 23:52:26.200135 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:52:26.203176 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 12 23:52:26.217743 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:52:26.219131 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:52:26.221408 systemd[1]: Stopped target timers.target - Timer Units.
Aug 12 23:52:26.223674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 12 23:52:26.223812 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:52:26.227571 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 12 23:52:26.229698 systemd[1]: Stopped target basic.target - Basic System.
Aug 12 23:52:26.231476 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 12 23:52:26.233572 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:52:26.236221 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 12 23:52:26.238713 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 12 23:52:26.240956 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:52:26.243348 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 12 23:52:26.245774 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 12 23:52:26.247754 systemd[1]: Stopped target swap.target - Swaps.
Aug 12 23:52:26.249741 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 12 23:52:26.249888 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:52:26.253607 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:52:26.255761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:52:26.257856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 12 23:52:26.261603 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:52:26.263011 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 12 23:52:26.263149 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:52:26.266202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 12 23:52:26.266339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:52:26.268661 systemd[1]: Stopped target paths.target - Path Units.
Aug 12 23:52:26.270328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 12 23:52:26.273602 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:52:26.274983 systemd[1]: Stopped target slices.target - Slice Units.
Aug 12 23:52:26.277191 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 12 23:52:26.278898 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 12 23:52:26.279000 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:52:26.280611 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 12 23:52:26.280694 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:52:26.282448 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 12 23:52:26.282582 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:52:26.284582 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 12 23:52:26.284694 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 12 23:52:26.300758 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 12 23:52:26.301688 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 12 23:52:26.301829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:52:26.305976 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 12 23:52:26.308059 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 12 23:52:26.309212 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:52:26.311947 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 12 23:52:26.313118 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:52:26.316583 ignition[994]: INFO : Ignition 2.19.0
Aug 12 23:52:26.316583 ignition[994]: INFO : Stage: umount
Aug 12 23:52:26.316583 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:52:26.316583 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:52:26.316583 ignition[994]: INFO : umount: umount passed
Aug 12 23:52:26.316583 ignition[994]: INFO : Ignition finished successfully
Aug 12 23:52:26.318178 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 12 23:52:26.318278 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 12 23:52:26.323621 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 12 23:52:26.325599 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 12 23:52:26.325724 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 12 23:52:26.328162 systemd[1]: Stopped target network.target - Network.
Aug 12 23:52:26.329475 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 12 23:52:26.329565 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 12 23:52:26.330779 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 12 23:52:26.330835 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 12 23:52:26.332650 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 12 23:52:26.332698 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 12 23:52:26.334583 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 12 23:52:26.334638 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 12 23:52:26.338495 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 12 23:52:26.341614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 12 23:52:26.348564 systemd-networkd[760]: eth0: DHCPv6 lease lost
Aug 12 23:52:26.349924 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 12 23:52:26.350050 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 12 23:52:26.352791 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 12 23:52:26.352908 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 12 23:52:26.355061 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 12 23:52:26.355121 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:52:26.366656 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 12 23:52:26.367760 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 12 23:52:26.367835 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:52:26.370115 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 12 23:52:26.370170 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:52:26.372126 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 12 23:52:26.372177 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:52:26.375164 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 12 23:52:26.375217 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:52:26.377552 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:52:26.388917 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:52:26.389047 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:52:26.392189 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 12 23:52:26.392293 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 12 23:52:26.394176 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 12 23:52:26.394259 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 12 23:52:26.398372 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:52:26.398542 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:52:26.400305 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:52:26.400353 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:52:26.402067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:52:26.402109 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:52:26.404279 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:52:26.404331 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:52:26.408508 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:52:26.408587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:52:26.411990 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:52:26.412044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:52:26.424688 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:52:26.425775 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:52:26.425855 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:52:26.428157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:52:26.428207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:52:26.432512 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:52:26.432636 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:52:26.434168 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:52:26.437021 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:52:26.447630 systemd[1]: Switching root.
Aug 12 23:52:26.472388 systemd-journald[238]: Journal stopped
Aug 12 23:52:27.402743 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:52:27.402813 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:52:27.402826 kernel: SELinux: policy capability open_perms=1
Aug 12 23:52:27.402836 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:52:27.402856 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:52:27.402868 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:52:27.402878 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:52:27.402887 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:52:27.402897 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:52:27.402910 kernel: audit: type=1403 audit(1755042746.659:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:52:27.402924 systemd[1]: Successfully loaded SELinux policy in 32.513ms.
Aug 12 23:52:27.402945 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.077ms.
Aug 12 23:52:27.402957 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 12 23:52:27.402968 systemd[1]: Detected virtualization kvm.
Aug 12 23:52:27.402979 systemd[1]: Detected architecture arm64.
Aug 12 23:52:27.402990 systemd[1]: Detected first boot.
Aug 12 23:52:27.403000 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:52:27.403011 zram_generator::config[1037]: No configuration found.
Aug 12 23:52:27.403024 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:52:27.403035 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:52:27.403045 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:52:27.403057 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:52:27.403068 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:52:27.403080 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:52:27.403090 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:52:27.403101 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:52:27.403113 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:52:27.403125 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:52:27.403136 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:52:27.403146 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:52:27.403157 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:52:27.403169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:52:27.403180 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:52:27.403190 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:52:27.403201 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:52:27.403215 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:52:27.403225 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 12 23:52:27.403236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:52:27.403247 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:52:27.403261 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:52:27.403272 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:52:27.403283 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:52:27.403295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:52:27.403306 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:52:27.403317 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:52:27.403328 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:52:27.403339 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:52:27.403349 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:52:27.403360 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:52:27.403371 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:52:27.403381 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:52:27.403393 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:52:27.403406 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:52:27.403417 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:52:27.403427 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:52:27.403438 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:52:27.403549 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:52:27.403565 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:52:27.403576 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:52:27.403587 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:52:27.403602 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:52:27.403613 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:52:27.403623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:52:27.403634 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:52:27.403646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:52:27.403656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:52:27.403667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:52:27.403677 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:52:27.403688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:52:27.403701 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:52:27.403711 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:52:27.403722 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:52:27.403732 kernel: fuse: init (API version 7.39)
Aug 12 23:52:27.403745 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:52:27.403755 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:52:27.403766 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:52:27.403776 kernel: ACPI: bus type drm_connector registered
Aug 12 23:52:27.403786 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:52:27.403799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:52:27.403810 kernel: loop: module loaded
Aug 12 23:52:27.403820 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:52:27.403831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:52:27.403842 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:52:27.403859 systemd[1]: Stopped verity-setup.service.
Aug 12 23:52:27.403872 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:52:27.403913 systemd-journald[1104]: Collecting audit messages is disabled.
Aug 12 23:52:27.403938 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:52:27.403950 systemd-journald[1104]: Journal started
Aug 12 23:52:27.403973 systemd-journald[1104]: Runtime Journal (/run/log/journal/e1d6c4eb54004c06a11cf440393f2aa4) is 5.9M, max 47.3M, 41.4M free.
Aug 12 23:52:27.121918 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:52:27.157067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 12 23:52:27.157459 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:52:27.406426 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:52:27.407201 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:52:27.408582 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:52:27.409969 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:52:27.411331 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:52:27.414559 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:52:27.416164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:52:27.417781 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:52:27.417952 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:52:27.420342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:52:27.420493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:52:27.421989 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:52:27.422151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:52:27.423581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:52:27.423719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:52:27.425251 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:52:27.426612 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:52:27.428070 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:52:27.428222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:52:27.429747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:52:27.431209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:52:27.432946 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:52:27.449009 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:52:27.461643 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:52:27.464130 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:52:27.465384 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:52:27.465426 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:52:27.467784 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 12 23:52:27.470348 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 12 23:52:27.472708 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:52:27.473884 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:52:27.475354 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:52:27.479714 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:52:27.481088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:52:27.482234 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:52:27.483502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:52:27.486729 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:52:27.491724 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:52:27.493419 systemd-journald[1104]: Time spent on flushing to /var/log/journal/e1d6c4eb54004c06a11cf440393f2aa4 is 17.771ms for 858 entries.
Aug 12 23:52:27.493419 systemd-journald[1104]: System Journal (/var/log/journal/e1d6c4eb54004c06a11cf440393f2aa4) is 8.0M, max 195.6M, 187.6M free.
Aug 12 23:52:27.518688 systemd-journald[1104]: Received client request to flush runtime journal.
Aug 12 23:52:27.495446 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:52:27.498938 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:52:27.503983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:52:27.505711 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:52:27.507364 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 12 23:52:27.509030 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:52:27.518712 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:52:27.523552 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 12 23:52:27.530637 kernel: loop0: detected capacity change from 0 to 114328
Aug 12 23:52:27.528533 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 12 23:52:27.534598 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:52:27.537611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:52:27.553650 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:52:27.559067 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 12 23:52:27.570288 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:52:27.572594 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 12 23:52:27.578786 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:52:27.581663 kernel: loop1: detected capacity change from 0 to 114432
Aug 12 23:52:27.589834 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:52:27.607586 kernel: loop2: detected capacity change from 0 to 211168
Aug 12 23:52:27.615279 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Aug 12 23:52:27.615305 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Aug 12 23:52:27.620161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:52:27.634548 kernel: loop3: detected capacity change from 0 to 114328
Aug 12 23:52:27.641540 kernel: loop4: detected capacity change from 0 to 114432
Aug 12 23:52:27.649542 kernel: loop5: detected capacity change from 0 to 211168
Aug 12 23:52:27.654007 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 12 23:52:27.654413 (sd-merge)[1174]: Merged extensions into '/usr'.
Aug 12 23:52:27.658903 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:52:27.659067 systemd[1]: Reloading...
Aug 12 23:52:27.709731 zram_generator::config[1197]: No configuration found.
Aug 12 23:52:27.821561 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:52:27.839277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:52:27.885138 systemd[1]: Reloading finished in 225 ms.
Aug 12 23:52:27.916566 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:52:27.918197 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:52:27.936787 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:52:27.939043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:52:27.951645 systemd[1]: Reloading requested from client PID 1235 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:52:27.951664 systemd[1]: Reloading...
Aug 12 23:52:27.961042 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:52:27.961310 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:52:27.962046 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:52:27.962278 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Aug 12 23:52:27.962338 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Aug 12 23:52:27.964902 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:52:27.964915 systemd-tmpfiles[1236]: Skipping /boot
Aug 12 23:52:27.972441 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:52:27.972454 systemd-tmpfiles[1236]: Skipping /boot
Aug 12 23:52:28.004055 zram_generator::config[1264]: No configuration found.
Aug 12 23:52:28.107197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:52:28.154142 systemd[1]: Reloading finished in 202 ms.
Aug 12 23:52:28.170088 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:52:28.184990 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:52:28.193762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 12 23:52:28.196828 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:52:28.199607 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:52:28.205897 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:52:28.212902 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:52:28.217852 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:52:28.226534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:52:28.228634 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:52:28.235567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:52:28.238631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:52:28.240441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:52:28.244857 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:52:28.247003 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:52:28.250120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:52:28.250972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:52:28.253230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:52:28.253404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:52:28.256381 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:52:28.257541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:52:28.260169 systemd-udevd[1305]: Using default interface naming scheme 'v255'.
Aug 12 23:52:28.268013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:52:28.279912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:52:28.286762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:52:28.292760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:52:28.300781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:52:28.300957 augenrules[1336]: No rules
Aug 12 23:52:28.302157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:52:28.304669 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:52:28.306247 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:52:28.307827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:52:28.309744 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:52:28.311697 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 12 23:52:28.315015 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:52:28.316910 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:52:28.318691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:52:28.318840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:52:28.320636 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:52:28.320772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:52:28.322310 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:52:28.322986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:52:28.324612 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:52:28.324761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:52:28.326479 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:52:28.347888 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:52:28.349025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:52:28.349172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:52:28.353762 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:52:28.354977 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:52:28.371598 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 12 23:52:28.395550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1348)
Aug 12 23:52:28.404939 systemd-resolved[1303]: Positive Trust Anchors:
Aug 12 23:52:28.404958 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:52:28.404991 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:52:28.423213 systemd-resolved[1303]: Defaulting to hostname 'linux'.
Aug 12 23:52:28.425612 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:52:28.426928 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:52:28.438061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:52:28.455771 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:52:28.472899 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:52:28.474405 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:52:28.477173 systemd-networkd[1372]: lo: Link UP
Aug 12 23:52:28.477180 systemd-networkd[1372]: lo: Gained carrier
Aug 12 23:52:28.478004 systemd-networkd[1372]: Enumeration completed
Aug 12 23:52:28.478107 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:52:28.478786 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:52:28.478794 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:52:28.479392 systemd[1]: Reached target network.target - Network.
Aug 12 23:52:28.479716 systemd-networkd[1372]: eth0: Link UP
Aug 12 23:52:28.479720 systemd-networkd[1372]: eth0: Gained carrier
Aug 12 23:52:28.479734 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:52:28.498778 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:52:28.501449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:52:28.503659 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:52:28.505609 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection.
Aug 12 23:52:28.509477 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 12 23:52:28.509837 systemd-timesyncd[1375]: Initial clock synchronization to Tue 2025-08-12 23:52:28.255113 UTC.
Aug 12 23:52:28.511403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:52:28.521829 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 12 23:52:28.525896 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 12 23:52:28.547567 lvm[1392]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:52:28.566875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:52:28.585310 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 12 23:52:28.587085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:52:28.588240 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:52:28.589482 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:52:28.590733 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:52:28.592135 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:52:28.593351 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:52:28.594596 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:52:28.595937 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:52:28.595990 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:52:28.596875 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:52:28.599208 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:52:28.601776 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:52:28.610699 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:52:28.613399 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 12 23:52:28.615074 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 12 23:52:28.616321 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:52:28.617318 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:52:28.618334 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:52:28.618374 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:52:28.619434 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 12 23:52:28.624539 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:52:28.621626 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 12 23:52:28.625659 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 12 23:52:28.628006 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 12 23:52:28.631700 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 12 23:52:28.633211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 12 23:52:28.637680 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 12 23:52:28.648773 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 12 23:52:28.654239 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 12 23:52:28.658955 jq[1404]: false
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found loop3
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found loop4
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found loop5
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda1
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda2
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda3
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found usr
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda4
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda6
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda7
Aug 12 23:52:28.659132 extend-filesystems[1405]: Found vda9
Aug 12 23:52:28.659132 extend-filesystems[1405]: Checking size of /dev/vda9
Aug 12 23:52:28.660145 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 12 23:52:28.683186 extend-filesystems[1405]: Resized partition /dev/vda9
Aug 12 23:52:28.700906 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1357)
Aug 12 23:52:28.700932 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 12 23:52:28.672347 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:52:28.701310 extend-filesystems[1426]: resize2fs 1.47.1 (20-May-2024)
Aug 12 23:52:28.672883 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 12 23:52:28.676097 systemd[1]: Starting update-engine.service - Update Engine...
Aug 12 23:52:28.712978 jq[1425]: true
Aug 12 23:52:28.678056 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 12 23:52:28.682310 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 12 23:52:28.686445 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:52:28.686638 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 12 23:52:28.687754 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:52:28.687925 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 12 23:52:28.694297 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:52:28.694621 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 12 23:52:28.714115 dbus-daemon[1403]: [system] SELinux support is enabled
Aug 12 23:52:28.715239 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 12 23:52:28.731914 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 12 23:52:28.734107 tar[1428]: linux-arm64/LICENSE
Aug 12 23:52:28.735654 tar[1428]: linux-arm64/helm
Aug 12 23:52:28.738600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:52:28.738636 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 12 23:52:28.740073 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:52:28.740101 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 12 23:52:28.741458 systemd-logind[1413]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 12 23:52:28.742391 systemd-logind[1413]: New seat seat0.
Aug 12 23:52:28.743381 jq[1433]: true
Aug 12 23:52:28.744362 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 12 23:52:28.757654 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 12 23:52:28.763838 systemd[1]: Started update-engine.service - Update Engine.
Aug 12 23:52:28.769826 update_engine[1421]: I20250812 23:52:28.759332 1421 main.cc:92] Flatcar Update Engine starting
Aug 12 23:52:28.769826 update_engine[1421]: I20250812 23:52:28.763883 1421 update_check_scheduler.cc:74] Next update check in 7m56s
Aug 12 23:52:28.768202 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 12 23:52:28.771476 extend-filesystems[1426]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 12 23:52:28.771476 extend-filesystems[1426]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 12 23:52:28.771476 extend-filesystems[1426]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 12 23:52:28.781139 extend-filesystems[1405]: Resized filesystem in /dev/vda9
Aug 12 23:52:28.775458 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:52:28.777572 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 12 23:52:28.825938 bash[1460]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:52:28.828394 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 12 23:52:28.830464 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 12 23:52:28.921759 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:52:29.029974 containerd[1436]: time="2025-08-12T23:52:29.029883741Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 12 23:52:29.065358 containerd[1436]: time="2025-08-12T23:52:29.065302554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:52:29.066921 containerd[1436]: time="2025-08-12T23:52:29.066865881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:52:29.066921 containerd[1436]: time="2025-08-12T23:52:29.066902751Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 12 23:52:29.066921 containerd[1436]: time="2025-08-12T23:52:29.066920257Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 12 23:52:29.067102 containerd[1436]: time="2025-08-12T23:52:29.067083305Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 12 23:52:29.067188 containerd[1436]: time="2025-08-12T23:52:29.067105574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067188 containerd[1436]: time="2025-08-12T23:52:29.067164132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067188 containerd[1436]: time="2025-08-12T23:52:29.067176486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067346 containerd[1436]: time="2025-08-12T23:52:29.067327064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067416 containerd[1436]: time="2025-08-12T23:52:29.067346080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067416 containerd[1436]: time="2025-08-12T23:52:29.067359751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067416 containerd[1436]: time="2025-08-12T23:52:29.067369278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067470 containerd[1436]: time="2025-08-12T23:52:29.067435931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067705 containerd[1436]: time="2025-08-12T23:52:29.067671170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067814 containerd[1436]: time="2025-08-12T23:52:29.067795063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:52:29.067838 containerd[1436]: time="2025-08-12T23:52:29.067815163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 12 23:52:29.067934 containerd[1436]: time="2025-08-12T23:52:29.067893240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 12 23:52:29.067961 containerd[1436]: time="2025-08-12T23:52:29.067940838Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:52:29.077953 containerd[1436]: time="2025-08-12T23:52:29.077881400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 12 23:52:29.078056 containerd[1436]: time="2025-08-12T23:52:29.078019352Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 12 23:52:29.078056 containerd[1436]: time="2025-08-12T23:52:29.078039995Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 12 23:52:29.078093 containerd[1436]: time="2025-08-12T23:52:29.078057384Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 12 23:52:29.078093 containerd[1436]: time="2025-08-12T23:52:29.078071985Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 12 23:52:29.078297 containerd[1436]: time="2025-08-12T23:52:29.078269541Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 12 23:52:29.078637 containerd[1436]: time="2025-08-12T23:52:29.078618642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 12 23:52:29.078806 containerd[1436]: time="2025-08-12T23:52:29.078755084Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 12 23:52:29.078806 containerd[1436]: time="2025-08-12T23:52:29.078776811Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 12 23:52:29.078806 containerd[1436]: time="2025-08-12T23:52:29.078790327Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 12 23:52:29.078806 containerd[1436]: time="2025-08-12T23:52:29.078805625Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.078881 containerd[1436]: time="2025-08-12T23:52:29.078818986Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.078881 containerd[1436]: time="2025-08-12T23:52:29.078839086Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.078881 containerd[1436]: time="2025-08-12T23:52:29.078853029Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.078881 containerd[1436]: time="2025-08-12T23:52:29.078871270Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.078958 containerd[1436]: time="2025-08-12T23:52:29.078884632Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.078958 containerd[1436]: time="2025-08-12T23:52:29.078903144Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.078958 containerd[1436]: time="2025-08-12T23:52:29.078940014Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 12 23:52:29.079016 containerd[1436]: time="2025-08-12T23:52:29.078960811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079016 containerd[1436]: time="2025-08-12T23:52:29.078982538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079016 containerd[1436]: time="2025-08-12T23:52:29.079006201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079074 containerd[1436]: time="2025-08-12T23:52:29.079020841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079074 containerd[1436]: time="2025-08-12T23:52:29.079033737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079074 containerd[1436]: time="2025-08-12T23:52:29.079054961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079074 containerd[1436]: time="2025-08-12T23:52:29.079066928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079145 containerd[1436]: time="2025-08-12T23:52:29.079079011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079145 containerd[1436]: time="2025-08-12T23:52:29.079093883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079145 containerd[1436]: time="2025-08-12T23:52:29.079108368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079145 containerd[1436]: time="2025-08-12T23:52:29.079127151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079145 containerd[1436]: time="2025-08-12T23:52:29.079138460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079237 containerd[1436]: time="2025-08-12T23:52:29.079151357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079237 containerd[1436]: time="2025-08-12T23:52:29.079169559Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 12 23:52:29.079237 containerd[1436]: time="2025-08-12T23:52:29.079198296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079237 containerd[1436]: time="2025-08-12T23:52:29.079212974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.079237 containerd[1436]: time="2025-08-12T23:52:29.079227498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 12 23:52:29.080246 containerd[1436]: time="2025-08-12T23:52:29.080191264Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 12 23:52:29.080289 containerd[1436]: time="2025-08-12T23:52:29.080253153Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 12 23:52:29.080289 containerd[1436]: time="2025-08-12T23:52:29.080267018Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 12 23:52:29.080327 containerd[1436]: time="2025-08-12T23:52:29.080291029Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 12 23:52:29.080327 containerd[1436]: time="2025-08-12T23:52:29.080302571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.080327 containerd[1436]: time="2025-08-12T23:52:29.080316707Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 12 23:52:29.080960 containerd[1436]: time="2025-08-12T23:52:29.080326776Z" level=info msg="NRI interface is disabled by configuration."
Aug 12 23:52:29.080960 containerd[1436]: time="2025-08-12T23:52:29.080338279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 12 23:52:29.081024 containerd[1436]: time="2025-08-12T23:52:29.080911000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 12 23:52:29.081024 containerd[1436]: time="2025-08-12T23:52:29.080979008Z" level=info msg="Connect containerd service"
Aug 12 23:52:29.081024 containerd[1436]: time="2025-08-12T23:52:29.081021184Z" level=info msg="using legacy CRI server"
Aug 12 23:52:29.081160 containerd[1436]: time="2025-08-12T23:52:29.081029665Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 12 23:52:29.081324 containerd[1436]: time="2025-08-12T23:52:29.081232178Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 12 23:52:29.082372 containerd[1436]: time="2025-08-12T23:52:29.082335794Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:52:29.083029 containerd[1436]: time="2025-08-12T23:52:29.082873854Z" level=info msg="Start subscribing containerd event"
Aug 12 23:52:29.083338 containerd[1436]: time="2025-08-12T23:52:29.083314704Z" level=info msg="Start recovering state"
Aug 12 23:52:29.083479 containerd[1436]: time="2025-08-12T23:52:29.083462958Z" level=info msg="Start event monitor"
Aug 12 23:52:29.083626 containerd[1436]: time="2025-08-12T23:52:29.083602381Z" level=info msg="Start snapshots syncer"
Aug 12 23:52:29.083700 containerd[1436]: time="2025-08-12T23:52:29.083680885Z" level=info msg="Start cni network conf syncer for default"
Aug 12 23:52:29.083765 containerd[1436]: time="2025-08-12T23:52:29.083753463Z" level=info msg="Start streaming server"
Aug 12 23:52:29.084195 containerd[1436]: time="2025-08-12T23:52:29.083886109Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 12 23:52:29.084195 containerd[1436]: time="2025-08-12T23:52:29.083975340Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 12 23:52:29.084380 containerd[1436]: time="2025-08-12T23:52:29.084355231Z" level=info msg="containerd successfully booted in 0.057029s"
Aug 12 23:52:29.084465 systemd[1]: Started containerd.service - containerd container runtime.
Aug 12 23:52:29.145555 tar[1428]: linux-arm64/README.md
Aug 12 23:52:29.155760 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 12 23:52:29.490598 sshd_keygen[1423]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 12 23:52:29.510309 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 12 23:52:29.522846 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 12 23:52:29.530563 systemd[1]: issuegen.service: Deactivated successfully.
Aug 12 23:52:29.530821 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 12 23:52:29.533677 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 12 23:52:29.546750 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 12 23:52:29.555934 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 12 23:52:29.558319 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 12 23:52:29.559689 systemd[1]: Reached target getty.target - Login Prompts.
Aug 12 23:52:30.467689 systemd-networkd[1372]: eth0: Gained IPv6LL
Aug 12 23:52:30.470023 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 12 23:52:30.472056 systemd[1]: Reached target network-online.target - Network is Online.
Aug 12 23:52:30.479900 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 12 23:52:30.482713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:30.485127 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 12 23:52:30.503422 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 12 23:52:30.503632 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 12 23:52:30.507216 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 12 23:52:30.509331 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 12 23:52:31.175567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:31.177356 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 12 23:52:31.179002 systemd[1]: Startup finished in 695ms (kernel) + 5.926s (initrd) + 4.552s (userspace) = 11.174s.
Aug 12 23:52:31.180527 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:52:31.655510 kubelet[1516]: E0812 23:52:31.655392 1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:52:31.657932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:52:31.658100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:52:34.014583 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 12 23:52:34.015821 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:36012.service - OpenSSH per-connection server daemon (10.0.0.1:36012).
Aug 12 23:52:34.084602 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 36012 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:52:34.126843 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:52:34.140430 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 12 23:52:34.148872 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 12 23:52:34.151311 systemd-logind[1413]: New session 1 of user core.
Aug 12 23:52:34.160065 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 12 23:52:34.168072 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 12 23:52:34.179397 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:52:34.277479 systemd[1534]: Queued start job for default target default.target.
Aug 12 23:52:34.292561 systemd[1534]: Created slice app.slice - User Application Slice.
Aug 12 23:52:34.292590 systemd[1534]: Reached target paths.target - Paths.
Aug 12 23:52:34.292603 systemd[1534]: Reached target timers.target - Timers.
Aug 12 23:52:34.293956 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 12 23:52:34.305722 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 12 23:52:34.305842 systemd[1534]: Reached target sockets.target - Sockets.
Aug 12 23:52:34.305856 systemd[1534]: Reached target basic.target - Basic System.
Aug 12 23:52:34.305897 systemd[1534]: Reached target default.target - Main User Target.
Aug 12 23:52:34.305925 systemd[1534]: Startup finished in 118ms.
Aug 12 23:52:34.306215 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 12 23:52:34.308770 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 12 23:52:34.375476 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:36018.service - OpenSSH per-connection server daemon (10.0.0.1:36018).
Aug 12 23:52:34.416835 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 36018 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:52:34.419006 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:52:34.423392 systemd-logind[1413]: New session 2 of user core.
Aug 12 23:52:34.434875 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 12 23:52:34.488826 sshd[1545]: pam_unix(sshd:session): session closed for user core
Aug 12 23:52:34.502472 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:36018.service: Deactivated successfully.
Aug 12 23:52:34.504022 systemd[1]: session-2.scope: Deactivated successfully.
Aug 12 23:52:34.506379 systemd-logind[1413]: Session 2 logged out. Waiting for processes to exit.
Aug 12 23:52:34.509104 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:36026.service - OpenSSH per-connection server daemon (10.0.0.1:36026).
Aug 12 23:52:34.511224 systemd-logind[1413]: Removed session 2.
Aug 12 23:52:34.553307 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 36026 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:52:34.555255 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:52:34.561759 systemd-logind[1413]: New session 3 of user core.
Aug 12 23:52:34.573114 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 12 23:52:34.624278 sshd[1552]: pam_unix(sshd:session): session closed for user core
Aug 12 23:52:34.634560 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:36026.service: Deactivated successfully.
Aug 12 23:52:34.637225 systemd[1]: session-3.scope: Deactivated successfully.
Aug 12 23:52:34.639532 systemd-logind[1413]: Session 3 logged out. Waiting for processes to exit.
Aug 12 23:52:34.654976 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:36036.service - OpenSSH per-connection server daemon (10.0.0.1:36036).
Aug 12 23:52:34.657031 systemd-logind[1413]: Removed session 3.
Aug 12 23:52:34.695808 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 36036 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:52:34.697222 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:52:34.702119 systemd-logind[1413]: New session 4 of user core.
Aug 12 23:52:34.721784 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 12 23:52:34.779934 sshd[1559]: pam_unix(sshd:session): session closed for user core
Aug 12 23:52:34.800723 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:36036.service: Deactivated successfully.
Aug 12 23:52:34.802556 systemd[1]: session-4.scope: Deactivated successfully.
Aug 12 23:52:34.807354 systemd-logind[1413]: Session 4 logged out. Waiting for processes to exit.
Aug 12 23:52:34.814004 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:36042.service - OpenSSH per-connection server daemon (10.0.0.1:36042).
Aug 12 23:52:34.815275 systemd-logind[1413]: Removed session 4.
Aug 12 23:52:34.852062 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 36042 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:52:34.853615 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:52:34.858055 systemd-logind[1413]: New session 5 of user core.
Aug 12 23:52:34.875819 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 12 23:52:34.949149 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 12 23:52:34.949820 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:52:34.969017 sudo[1569]: pam_unix(sudo:session): session closed for user root
Aug 12 23:52:34.971218 sshd[1566]: pam_unix(sshd:session): session closed for user core
Aug 12 23:52:34.978364 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:36042.service: Deactivated successfully.
Aug 12 23:52:34.980812 systemd[1]: session-5.scope: Deactivated successfully.
Aug 12 23:52:34.984489 systemd-logind[1413]: Session 5 logged out. Waiting for processes to exit.
Aug 12 23:52:34.999014 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:36044.service - OpenSSH per-connection server daemon (10.0.0.1:36044).
Aug 12 23:52:35.001984 systemd-logind[1413]: Removed session 5.
Aug 12 23:52:35.036270 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 36044 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:52:35.038049 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:52:35.042652 systemd-logind[1413]: New session 6 of user core.
Aug 12 23:52:35.052745 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 12 23:52:35.105577 sudo[1578]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 12 23:52:35.106185 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:52:35.110767 sudo[1578]: pam_unix(sudo:session): session closed for user root
Aug 12 23:52:35.116904 sudo[1577]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 12 23:52:35.117211 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:52:35.141884 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 12 23:52:35.143930 auditctl[1581]: No rules
Aug 12 23:52:35.144363 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:52:35.146661 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 12 23:52:35.151793 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 12 23:52:35.205078 augenrules[1599]: No rules
Aug 12 23:52:35.205734 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 12 23:52:35.207767 sudo[1577]: pam_unix(sudo:session): session closed for user root
Aug 12 23:52:35.209796 sshd[1574]: pam_unix(sshd:session): session closed for user core
Aug 12 23:52:35.217533 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:36044.service: Deactivated successfully.
Aug 12 23:52:35.219498 systemd[1]: session-6.scope: Deactivated successfully.
Aug 12 23:52:35.223781 systemd-logind[1413]: Session 6 logged out. Waiting for processes to exit.
Aug 12 23:52:35.227226 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:36052.service - OpenSSH per-connection server daemon (10.0.0.1:36052).
Aug 12 23:52:35.228674 systemd-logind[1413]: Removed session 6.
Aug 12 23:52:35.269294 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 36052 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:52:35.270750 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:52:35.276648 systemd-logind[1413]: New session 7 of user core.
Aug 12 23:52:35.289759 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 12 23:52:35.347842 sudo[1610]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 12 23:52:35.348235 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:52:35.689782 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 12 23:52:35.689897 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 12 23:52:36.017023 dockerd[1628]: time="2025-08-12T23:52:36.016871037Z" level=info msg="Starting up"
Aug 12 23:52:36.243837 dockerd[1628]: time="2025-08-12T23:52:36.243766249Z" level=info msg="Loading containers: start."
Aug 12 23:52:36.380662 kernel: Initializing XFRM netlink socket
Aug 12 23:52:36.466389 systemd-networkd[1372]: docker0: Link UP
Aug 12 23:52:36.488037 dockerd[1628]: time="2025-08-12T23:52:36.487994101Z" level=info msg="Loading containers: done."
Aug 12 23:52:36.505488 dockerd[1628]: time="2025-08-12T23:52:36.505407519Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 12 23:52:36.505720 dockerd[1628]: time="2025-08-12T23:52:36.505567813Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 12 23:52:36.505720 dockerd[1628]: time="2025-08-12T23:52:36.505694215Z" level=info msg="Daemon has completed initialization"
Aug 12 23:52:36.545549 dockerd[1628]: time="2025-08-12T23:52:36.545380796Z" level=info msg="API listen on /run/docker.sock"
Aug 12 23:52:36.545683 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 12 23:52:37.092553 containerd[1436]: time="2025-08-12T23:52:37.092481363Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\""
Aug 12 23:52:37.734952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053800840.mount: Deactivated successfully.
Aug 12 23:52:38.684018 containerd[1436]: time="2025-08-12T23:52:38.683953914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:38.684600 containerd[1436]: time="2025-08-12T23:52:38.684567356Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352096"
Aug 12 23:52:38.685619 containerd[1436]: time="2025-08-12T23:52:38.685561050Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:38.690805 containerd[1436]: time="2025-08-12T23:52:38.690729193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:38.691949 containerd[1436]: time="2025-08-12T23:52:38.691902276Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 1.599349684s"
Aug 12 23:52:38.691949 containerd[1436]: time="2025-08-12T23:52:38.691950333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\""
Aug 12 23:52:38.696011 containerd[1436]: time="2025-08-12T23:52:38.695786604Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\""
Aug 12 23:52:39.811430 containerd[1436]: time="2025-08-12T23:52:39.811378032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:39.812254 containerd[1436]: time="2025-08-12T23:52:39.812202521Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537848"
Aug 12 23:52:39.813038 containerd[1436]: time="2025-08-12T23:52:39.813007138Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:39.815959 containerd[1436]: time="2025-08-12T23:52:39.815910063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:39.817409 containerd[1436]: time="2025-08-12T23:52:39.817371343Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 1.1215394s"
Aug 12 23:52:39.817607 containerd[1436]: time="2025-08-12T23:52:39.817495219Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\""
Aug 12 23:52:39.818006 containerd[1436]: time="2025-08-12T23:52:39.817935348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Aug 12 23:52:41.202033 containerd[1436]: time="2025-08-12T23:52:41.201781859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:41.205747 containerd[1436]: time="2025-08-12T23:52:41.205576096Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293526"
Aug 12 23:52:41.208772 containerd[1436]: time="2025-08-12T23:52:41.208405667Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:41.211075 containerd[1436]: time="2025-08-12T23:52:41.211005041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:41.212220 containerd[1436]: time="2025-08-12T23:52:41.212193346Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 1.394226408s"
Aug 12 23:52:41.212285 containerd[1436]: time="2025-08-12T23:52:41.212226135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\""
Aug 12 23:52:41.212814 containerd[1436]: time="2025-08-12T23:52:41.212682395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Aug 12 23:52:41.852041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:52:41.861753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:41.992940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:41.998689 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:52:42.039751 kubelet[1850]: E0812 23:52:42.039692    1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:52:42.043941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:52:42.044093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:52:42.151258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716288898.mount: Deactivated successfully.
Aug 12 23:52:42.634051 containerd[1436]: time="2025-08-12T23:52:42.633760297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:42.636228 containerd[1436]: time="2025-08-12T23:52:42.636174260Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199474"
Aug 12 23:52:42.637405 containerd[1436]: time="2025-08-12T23:52:42.637351271Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:42.641267 containerd[1436]: time="2025-08-12T23:52:42.640289919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:42.641267 containerd[1436]: time="2025-08-12T23:52:42.641091126Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 1.428374021s"
Aug 12 23:52:42.641267 containerd[1436]: time="2025-08-12T23:52:42.641133170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\""
Aug 12 23:52:42.641841 containerd[1436]: time="2025-08-12T23:52:42.641810435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Aug 12 23:52:43.286926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508690764.mount: Deactivated successfully.
Aug 12 23:52:44.025761 containerd[1436]: time="2025-08-12T23:52:44.025711706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:44.026782 containerd[1436]: time="2025-08-12T23:52:44.026528986Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Aug 12 23:52:44.027501 containerd[1436]: time="2025-08-12T23:52:44.027466628Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:44.031285 containerd[1436]: time="2025-08-12T23:52:44.031214089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:44.032669 containerd[1436]: time="2025-08-12T23:52:44.032562363Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.390713807s"
Aug 12 23:52:44.032669 containerd[1436]: time="2025-08-12T23:52:44.032607528Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Aug 12 23:52:44.033095 containerd[1436]: time="2025-08-12T23:52:44.033057152Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 12 23:52:44.465583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999079869.mount: Deactivated successfully.
Aug 12 23:52:44.477954 containerd[1436]: time="2025-08-12T23:52:44.477900654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:44.479348 containerd[1436]: time="2025-08-12T23:52:44.479306201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Aug 12 23:52:44.480785 containerd[1436]: time="2025-08-12T23:52:44.480721944Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:44.483733 containerd[1436]: time="2025-08-12T23:52:44.483669530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:44.484550 containerd[1436]: time="2025-08-12T23:52:44.484443796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 451.351515ms"
Aug 12 23:52:44.484550 containerd[1436]: time="2025-08-12T23:52:44.484480796Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 12 23:52:44.484936 containerd[1436]: time="2025-08-12T23:52:44.484909550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Aug 12 23:52:45.012934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280434430.mount: Deactivated successfully.
Aug 12 23:52:46.371605 containerd[1436]: time="2025-08-12T23:52:46.370882778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:46.371957 containerd[1436]: time="2025-08-12T23:52:46.371682982Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Aug 12 23:52:46.372457 containerd[1436]: time="2025-08-12T23:52:46.372409987Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:46.376945 containerd[1436]: time="2025-08-12T23:52:46.376882535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:46.378209 containerd[1436]: time="2025-08-12T23:52:46.378164951Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.893222253s"
Aug 12 23:52:46.378209 containerd[1436]: time="2025-08-12T23:52:46.378212355Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Aug 12 23:52:51.981395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:52.005906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:52.042126 systemd[1]: Reloading requested from client PID 2005 ('systemctl') (unit session-7.scope)...
Aug 12 23:52:52.042144 systemd[1]: Reloading...
Aug 12 23:52:52.135997 zram_generator::config[2044]: No configuration found.
Aug 12 23:52:52.327904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:52:52.395500 systemd[1]: Reloading finished in 352 ms.
Aug 12 23:52:52.440079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:52.444045 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:52.444866 systemd[1]: kubelet.service: Deactivated successfully.
Aug 12 23:52:52.445741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:52.447648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:52.561979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:52.567330 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 12 23:52:52.604065 kubelet[2091]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:52:52.604065 kubelet[2091]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:52:52.604065 kubelet[2091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:52:52.604426 kubelet[2091]: I0812 23:52:52.604106    2091 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:52:53.841526 kubelet[2091]: I0812 23:52:53.841120    2091 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 12 23:52:53.841526 kubelet[2091]: I0812 23:52:53.841158    2091 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:52:53.841526 kubelet[2091]: I0812 23:52:53.841391    2091 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 12 23:52:53.887971 kubelet[2091]: E0812 23:52:53.887910    2091 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Aug 12 23:52:53.891425 kubelet[2091]: I0812 23:52:53.891008    2091 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:52:53.898138 kubelet[2091]: E0812 23:52:53.898082    2091 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 12 23:52:53.898138 kubelet[2091]: I0812 23:52:53.898131    2091 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 12 23:52:53.901455 kubelet[2091]: I0812 23:52:53.901409    2091 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:52:53.903503 kubelet[2091]: I0812 23:52:53.903439    2091 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:52:53.903682 kubelet[2091]: I0812 23:52:53.903495    2091 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:52:53.903771 kubelet[2091]: I0812 23:52:53.903736    2091 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:52:53.903771 kubelet[2091]: I0812 23:52:53.903745    2091 container_manager_linux.go:303] "Creating device plugin manager"
Aug 12 23:52:53.904615 kubelet[2091]: I0812 23:52:53.904583    2091 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:52:53.921506 kubelet[2091]: I0812 23:52:53.921460    2091 kubelet.go:480] "Attempting to sync node with API server"
Aug 12 23:52:53.921506 kubelet[2091]: I0812 23:52:53.921508    2091 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:52:53.921662 kubelet[2091]: I0812 23:52:53.921549    2091 kubelet.go:386] "Adding apiserver pod source"
Aug 12 23:52:53.923556 kubelet[2091]: I0812 23:52:53.922889    2091 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:52:53.924142 kubelet[2091]: I0812 23:52:53.924115    2091 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 12 23:52:53.924344 kubelet[2091]: E0812 23:52:53.924174    2091 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Aug 12 23:52:53.924460 kubelet[2091]: E0812 23:52:53.924385    2091 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Aug 12 23:52:53.925064 kubelet[2091]: I0812 23:52:53.925041    2091 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 12 23:52:53.927851 kubelet[2091]: W0812 23:52:53.927831    2091 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 12 23:52:53.934104 kubelet[2091]: I0812 23:52:53.934045    2091 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:52:53.934207 kubelet[2091]: I0812 23:52:53.934128    2091 server.go:1289] "Started kubelet"
Aug 12 23:52:53.940912 kubelet[2091]: I0812 23:52:53.940850    2091 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:52:53.941303 kubelet[2091]: I0812 23:52:53.941287    2091 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:52:53.941730 kubelet[2091]: I0812 23:52:53.941704    2091 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:52:53.945598 kubelet[2091]: I0812 23:52:53.945556    2091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:52:53.946749 kubelet[2091]: I0812 23:52:53.946700    2091 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:52:53.948146 kubelet[2091]: I0812 23:52:53.948088    2091 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:52:53.948270 kubelet[2091]: E0812 23:52:53.948244    2091 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:52:53.948437 kubelet[2091]: I0812 23:52:53.948404    2091 server.go:317] "Adding debug handlers to kubelet server"
Aug 12 23:52:53.948837 kubelet[2091]: E0812 23:52:53.948805    2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms"
Aug 12 23:52:53.948952 kubelet[2091]: I0812 23:52:53.948930    2091 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:52:53.949003 kubelet[2091]: I0812 23:52:53.948994    2091 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:52:53.949401 kubelet[2091]: E0812 23:52:53.949363    2091 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Aug 12 23:52:53.951265 kubelet[2091]: I0812 23:52:53.950424    2091 factory.go:223] Registration of the systemd container factory successfully
Aug 12 23:52:53.951265 kubelet[2091]: I0812 23:52:53.950530    2091 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:52:53.952284 kubelet[2091]: E0812 23:52:53.952254    2091 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:52:53.953231 kubelet[2091]: E0812 23:52:53.950245    2091 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a1ab1f4fd85  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:52:53.934079365 +0000 UTC m=+1.362987603,LastTimestamp:2025-08-12 23:52:53.934079365 +0000 UTC m=+1.362987603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 12 23:52:53.958794 kubelet[2091]: I0812 23:52:53.958758    2091 factory.go:223] Registration of the containerd container factory successfully
Aug 12 23:52:53.974128 kubelet[2091]: I0812 23:52:53.974101    2091 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 12 23:52:53.974128 kubelet[2091]: I0812 23:52:53.974120    2091 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 12 23:52:53.974305 kubelet[2091]: I0812 23:52:53.974142    2091 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:52:54.046397 kubelet[2091]: I0812 23:52:54.046357    2091 policy_none.go:49] "None policy: Start"
Aug 12 23:52:54.046397 kubelet[2091]: I0812 23:52:54.046390    2091 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:52:54.046397 kubelet[2091]: I0812 23:52:54.046403    2091 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:52:54.048646 kubelet[2091]: E0812 23:52:54.048620    2091 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:52:54.049597 kubelet[2091]: I0812 23:52:54.049541    2091 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:52:54.051480 kubelet[2091]: I0812 23:52:54.051439    2091 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:52:54.051480 kubelet[2091]: I0812 23:52:54.051477    2091 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 12 23:52:54.051598 kubelet[2091]: I0812 23:52:54.051498    2091 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 12 23:52:54.051598 kubelet[2091]: I0812 23:52:54.051505    2091 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 12 23:52:54.051598 kubelet[2091]: E0812 23:52:54.051576    2091 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:52:54.055781 kubelet[2091]: E0812 23:52:54.055724    2091 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Aug 12 23:52:54.056144 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 12 23:52:54.068106 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 12 23:52:54.071050 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 12 23:52:54.081659 kubelet[2091]: E0812 23:52:54.081413 2091 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 12 23:52:54.081659 kubelet[2091]: I0812 23:52:54.081656 2091 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:52:54.081792 kubelet[2091]: I0812 23:52:54.081669 2091 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:52:54.081980 kubelet[2091]: I0812 23:52:54.081959 2091 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:52:54.082719 kubelet[2091]: E0812 23:52:54.082694 2091 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 12 23:52:54.082792 kubelet[2091]: E0812 23:52:54.082728 2091 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 12 23:52:54.149761 kubelet[2091]: E0812 23:52:54.149712 2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Aug 12 23:52:54.167694 systemd[1]: Created slice kubepods-burstable-pod78eecf08727bcc67062fe6c6b50c13ed.slice - libcontainer container kubepods-burstable-pod78eecf08727bcc67062fe6c6b50c13ed.slice. 
Aug 12 23:52:54.184526 kubelet[2091]: I0812 23:52:54.184467 2091 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:52:54.184924 kubelet[2091]: E0812 23:52:54.184891 2091 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Aug 12 23:52:54.184924 kubelet[2091]: E0812 23:52:54.184905 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:54.188188 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice - libcontainer container kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. Aug 12 23:52:54.190114 kubelet[2091]: E0812 23:52:54.189918 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:54.192411 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice - libcontainer container kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. 
Aug 12 23:52:54.193894 kubelet[2091]: E0812 23:52:54.193851 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:54.350317 kubelet[2091]: I0812 23:52:54.350247 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:54.350317 kubelet[2091]: I0812 23:52:54.350287 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:54.350317 kubelet[2091]: I0812 23:52:54.350319 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:54.350497 kubelet[2091]: I0812 23:52:54.350336 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78eecf08727bcc67062fe6c6b50c13ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"78eecf08727bcc67062fe6c6b50c13ed\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:54.350497 kubelet[2091]: I0812 23:52:54.350369 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:54.350497 kubelet[2091]: I0812 23:52:54.350399 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:54.350497 kubelet[2091]: I0812 23:52:54.350415 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78eecf08727bcc67062fe6c6b50c13ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"78eecf08727bcc67062fe6c6b50c13ed\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:54.350497 kubelet[2091]: I0812 23:52:54.350430 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78eecf08727bcc67062fe6c6b50c13ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"78eecf08727bcc67062fe6c6b50c13ed\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:54.350644 kubelet[2091]: I0812 23:52:54.350448 2091 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:54.386446 kubelet[2091]: I0812 23:52:54.386397 2091 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:52:54.386807 kubelet[2091]: E0812 
23:52:54.386737 2091 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Aug 12 23:52:54.486506 kubelet[2091]: E0812 23:52:54.486360 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:54.487176 containerd[1436]: time="2025-08-12T23:52:54.487126627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:78eecf08727bcc67062fe6c6b50c13ed,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:54.490763 kubelet[2091]: E0812 23:52:54.490733 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:54.491261 containerd[1436]: time="2025-08-12T23:52:54.491217766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:54.494821 kubelet[2091]: E0812 23:52:54.494784 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:54.495545 containerd[1436]: time="2025-08-12T23:52:54.495285932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:54.551018 kubelet[2091]: E0812 23:52:54.550931 2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Aug 12 23:52:54.790613 kubelet[2091]: 
I0812 23:52:54.788353 2091 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:52:54.790613 kubelet[2091]: E0812 23:52:54.788694 2091 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Aug 12 23:52:54.970384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586330113.mount: Deactivated successfully. Aug 12 23:52:54.978946 containerd[1436]: time="2025-08-12T23:52:54.978895710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:54.980085 containerd[1436]: time="2025-08-12T23:52:54.980057318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 12 23:52:54.983370 containerd[1436]: time="2025-08-12T23:52:54.983294182Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:54.984777 containerd[1436]: time="2025-08-12T23:52:54.984732677Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:54.985687 containerd[1436]: time="2025-08-12T23:52:54.985650281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:52:54.987146 containerd[1436]: time="2025-08-12T23:52:54.987073554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:54.987867 containerd[1436]: 
time="2025-08-12T23:52:54.987830339Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:52:54.989170 containerd[1436]: time="2025-08-12T23:52:54.989102103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:54.991296 containerd[1436]: time="2025-08-12T23:52:54.990050392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.840499ms" Aug 12 23:52:54.994242 containerd[1436]: time="2025-08-12T23:52:54.994185761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.88425ms" Aug 12 23:52:54.994957 containerd[1436]: time="2025-08-12T23:52:54.994788760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.423118ms" Aug 12 23:52:55.114039 containerd[1436]: time="2025-08-12T23:52:55.113747770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:55.114171 containerd[1436]: time="2025-08-12T23:52:55.113994926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:55.114171 containerd[1436]: time="2025-08-12T23:52:55.114055945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:55.114171 containerd[1436]: time="2025-08-12T23:52:55.114072969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:55.114236 containerd[1436]: time="2025-08-12T23:52:55.114159163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:55.114653 containerd[1436]: time="2025-08-12T23:52:55.114367038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:55.114653 containerd[1436]: time="2025-08-12T23:52:55.114401684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:55.114653 containerd[1436]: time="2025-08-12T23:52:55.114502904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:55.114653 containerd[1436]: time="2025-08-12T23:52:55.113468686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:55.114653 containerd[1436]: time="2025-08-12T23:52:55.114365639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:55.114653 containerd[1436]: time="2025-08-12T23:52:55.114379746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:55.114653 containerd[1436]: time="2025-08-12T23:52:55.114471815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:55.140779 systemd[1]: Started cri-containerd-7307142b03f7673f35e4bc7b162097d10d2fa9f2318e2d819552af2badcbc26e.scope - libcontainer container 7307142b03f7673f35e4bc7b162097d10d2fa9f2318e2d819552af2badcbc26e. Aug 12 23:52:55.145036 systemd[1]: Started cri-containerd-613e84504e5954d24ef5cb373b9f7120c95198ba75f4218b803683bc2bf8a51e.scope - libcontainer container 613e84504e5954d24ef5cb373b9f7120c95198ba75f4218b803683bc2bf8a51e. Aug 12 23:52:55.146049 systemd[1]: Started cri-containerd-c12fe60eabdc7761b917ba7d88958793ad5b6ed4b5b35b401daedab1bd5b5958.scope - libcontainer container c12fe60eabdc7761b917ba7d88958793ad5b6ed4b5b35b401daedab1bd5b5958. 
Aug 12 23:52:55.178395 kubelet[2091]: E0812 23:52:55.178349 2091 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 12 23:52:55.181272 containerd[1436]: time="2025-08-12T23:52:55.181162759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:78eecf08727bcc67062fe6c6b50c13ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"613e84504e5954d24ef5cb373b9f7120c95198ba75f4218b803683bc2bf8a51e\"" Aug 12 23:52:55.182155 kubelet[2091]: E0812 23:52:55.182118 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:55.182240 containerd[1436]: time="2025-08-12T23:52:55.181196845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7307142b03f7673f35e4bc7b162097d10d2fa9f2318e2d819552af2badcbc26e\"" Aug 12 23:52:55.183167 kubelet[2091]: E0812 23:52:55.183146 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:55.188547 containerd[1436]: time="2025-08-12T23:52:55.188308499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"c12fe60eabdc7761b917ba7d88958793ad5b6ed4b5b35b401daedab1bd5b5958\"" Aug 12 23:52:55.191870 containerd[1436]: time="2025-08-12T23:52:55.191813755Z" level=info msg="CreateContainer within sandbox 
\"7307142b03f7673f35e4bc7b162097d10d2fa9f2318e2d819552af2badcbc26e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:52:55.192266 kubelet[2091]: E0812 23:52:55.192232 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:55.192351 containerd[1436]: time="2025-08-12T23:52:55.192324930Z" level=info msg="CreateContainer within sandbox \"613e84504e5954d24ef5cb373b9f7120c95198ba75f4218b803683bc2bf8a51e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:52:55.198407 containerd[1436]: time="2025-08-12T23:52:55.198366640Z" level=info msg="CreateContainer within sandbox \"c12fe60eabdc7761b917ba7d88958793ad5b6ed4b5b35b401daedab1bd5b5958\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:52:55.214929 containerd[1436]: time="2025-08-12T23:52:55.214879285Z" level=info msg="CreateContainer within sandbox \"613e84504e5954d24ef5cb373b9f7120c95198ba75f4218b803683bc2bf8a51e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f144debbdb5b059b4e97074896dfe5f11f653515cb1abd8ce6cb08205cc7d68c\"" Aug 12 23:52:55.215720 containerd[1436]: time="2025-08-12T23:52:55.215690963Z" level=info msg="StartContainer for \"f144debbdb5b059b4e97074896dfe5f11f653515cb1abd8ce6cb08205cc7d68c\"" Aug 12 23:52:55.226431 containerd[1436]: time="2025-08-12T23:52:55.226349351Z" level=info msg="CreateContainer within sandbox \"7307142b03f7673f35e4bc7b162097d10d2fa9f2318e2d819552af2badcbc26e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"87e8a0eedb9b695fea524c704a6d702c49754eb6647b67fe43df32543c54c831\"" Aug 12 23:52:55.226947 containerd[1436]: time="2025-08-12T23:52:55.226840186Z" level=info msg="StartContainer for \"87e8a0eedb9b695fea524c704a6d702c49754eb6647b67fe43df32543c54c831\"" Aug 12 23:52:55.227006 kubelet[2091]: 
E0812 23:52:55.226975 2091 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 12 23:52:55.227994 containerd[1436]: time="2025-08-12T23:52:55.227953726Z" level=info msg="CreateContainer within sandbox \"c12fe60eabdc7761b917ba7d88958793ad5b6ed4b5b35b401daedab1bd5b5958\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b912455984e1562474f4d5620cceac21217912d303531b95a1b3bb4a98fa89ea\"" Aug 12 23:52:55.228368 containerd[1436]: time="2025-08-12T23:52:55.228336188Z" level=info msg="StartContainer for \"b912455984e1562474f4d5620cceac21217912d303531b95a1b3bb4a98fa89ea\"" Aug 12 23:52:55.245784 systemd[1]: Started cri-containerd-f144debbdb5b059b4e97074896dfe5f11f653515cb1abd8ce6cb08205cc7d68c.scope - libcontainer container f144debbdb5b059b4e97074896dfe5f11f653515cb1abd8ce6cb08205cc7d68c. Aug 12 23:52:55.260753 systemd[1]: Started cri-containerd-87e8a0eedb9b695fea524c704a6d702c49754eb6647b67fe43df32543c54c831.scope - libcontainer container 87e8a0eedb9b695fea524c704a6d702c49754eb6647b67fe43df32543c54c831. Aug 12 23:52:55.261868 systemd[1]: Started cri-containerd-b912455984e1562474f4d5620cceac21217912d303531b95a1b3bb4a98fa89ea.scope - libcontainer container b912455984e1562474f4d5620cceac21217912d303531b95a1b3bb4a98fa89ea. 
Aug 12 23:52:55.304366 containerd[1436]: time="2025-08-12T23:52:55.302054110Z" level=info msg="StartContainer for \"b912455984e1562474f4d5620cceac21217912d303531b95a1b3bb4a98fa89ea\" returns successfully" Aug 12 23:52:55.304366 containerd[1436]: time="2025-08-12T23:52:55.302152412Z" level=info msg="StartContainer for \"f144debbdb5b059b4e97074896dfe5f11f653515cb1abd8ce6cb08205cc7d68c\" returns successfully" Aug 12 23:52:55.339404 containerd[1436]: time="2025-08-12T23:52:55.336619037Z" level=info msg="StartContainer for \"87e8a0eedb9b695fea524c704a6d702c49754eb6647b67fe43df32543c54c831\" returns successfully" Aug 12 23:52:55.355132 kubelet[2091]: E0812 23:52:55.352316 2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Aug 12 23:52:55.468953 kubelet[2091]: E0812 23:52:55.468801 2091 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 12 23:52:55.495565 kubelet[2091]: E0812 23:52:55.495493 2091 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 12 23:52:55.591031 kubelet[2091]: I0812 23:52:55.590713 2091 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:52:55.591563 kubelet[2091]: E0812 23:52:55.591455 2091 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Aug 12 23:52:56.059857 kubelet[2091]: E0812 23:52:56.059780 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:56.060146 kubelet[2091]: E0812 23:52:56.059961 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:56.062766 kubelet[2091]: E0812 23:52:56.062581 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:56.062766 kubelet[2091]: E0812 23:52:56.062696 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:56.065055 kubelet[2091]: E0812 23:52:56.065023 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:56.065377 kubelet[2091]: E0812 23:52:56.065333 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:57.070733 kubelet[2091]: E0812 23:52:57.070611 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:57.071472 kubelet[2091]: E0812 23:52:57.071279 2091 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:52:57.071472 kubelet[2091]: E0812 23:52:57.071322 2091 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:57.071697 kubelet[2091]: E0812 23:52:57.071555 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:57.193341 kubelet[2091]: I0812 23:52:57.193042 2091 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:52:57.711529 kubelet[2091]: E0812 23:52:57.711473 2091 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 12 23:52:57.857706 kubelet[2091]: I0812 23:52:57.857658 2091 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 12 23:52:57.857706 kubelet[2091]: E0812 23:52:57.857708 2091 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 12 23:52:57.868692 kubelet[2091]: E0812 23:52:57.868637 2091 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:52:57.969425 kubelet[2091]: E0812 23:52:57.969244 2091 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:52:58.048571 kubelet[2091]: I0812 23:52:58.048508 2091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:58.055460 kubelet[2091]: E0812 23:52:58.055425 2091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:58.055460 kubelet[2091]: I0812 23:52:58.055456 2091 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:58.057308 kubelet[2091]: E0812 23:52:58.057280 2091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:58.057308 kubelet[2091]: I0812 23:52:58.057307 2091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:58.058938 kubelet[2091]: E0812 23:52:58.058912 2091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:58.068236 kubelet[2091]: I0812 23:52:58.068215 2091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:58.070198 kubelet[2091]: E0812 23:52:58.070132 2091 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:58.070320 kubelet[2091]: E0812 23:52:58.070303 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:58.927530 kubelet[2091]: I0812 23:52:58.927316 2091 apiserver.go:52] "Watching apiserver" Aug 12 23:52:58.949926 kubelet[2091]: I0812 23:52:58.949878 2091 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 23:52:59.301151 kubelet[2091]: I0812 23:52:59.300845 2091 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:59.308095 kubelet[2091]: E0812 23:52:59.308054 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:00.071654 kubelet[2091]: E0812 23:53:00.071581 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:00.100371 systemd[1]: Reloading requested from client PID 2380 ('systemctl') (unit session-7.scope)... Aug 12 23:53:00.100787 systemd[1]: Reloading... Aug 12 23:53:00.187649 zram_generator::config[2419]: No configuration found. Aug 12 23:53:00.290961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:53:00.374256 systemd[1]: Reloading finished in 273 ms. Aug 12 23:53:00.411233 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:53:00.429977 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:53:00.430205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:53:00.430266 systemd[1]: kubelet.service: Consumed 1.794s CPU time, 131.3M memory peak, 0B memory swap peak. Aug 12 23:53:00.441962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:53:00.562441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:53:00.573895 (kubelet)[2461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:53:00.620844 kubelet[2461]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:53:00.620844 kubelet[2461]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 12 23:53:00.620844 kubelet[2461]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:53:00.621230 kubelet[2461]: I0812 23:53:00.620871 2461 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:53:00.628067 kubelet[2461]: I0812 23:53:00.627926 2461 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 12 23:53:00.628067 kubelet[2461]: I0812 23:53:00.627964 2461 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:53:00.628220 kubelet[2461]: I0812 23:53:00.628199 2461 server.go:956] "Client rotation is on, will bootstrap in background" Aug 12 23:53:00.629506 kubelet[2461]: I0812 23:53:00.629475 2461 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 12 23:53:00.634022 kubelet[2461]: I0812 23:53:00.633970 2461 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:53:00.638107 kubelet[2461]: E0812 23:53:00.638039 2461 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:53:00.638107 kubelet[2461]: I0812 23:53:00.638083 2461 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Aug 12 23:53:00.640771 kubelet[2461]: I0812 23:53:00.640747 2461 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:53:00.640995 kubelet[2461]: I0812 23:53:00.640966 2461 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:53:00.641151 kubelet[2461]: I0812 23:53:00.640995 2461 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:53:00.641236 kubelet[2461]: I0812 23:53:00.641161 2461 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:53:00.641236 kubelet[2461]: I0812 23:53:00.641169 2461 container_manager_linux.go:303] "Creating device plugin manager"
Aug 12 23:53:00.641236 kubelet[2461]: I0812 23:53:00.641216 2461 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:53:00.641376 kubelet[2461]: I0812 23:53:00.641365 2461 kubelet.go:480] "Attempting to sync node with API server"
Aug 12 23:53:00.641403 kubelet[2461]: I0812 23:53:00.641380 2461 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:53:00.641440 kubelet[2461]: I0812 23:53:00.641405 2461 kubelet.go:386] "Adding apiserver pod source"
Aug 12 23:53:00.641440 kubelet[2461]: I0812 23:53:00.641421 2461 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:53:00.642704 kubelet[2461]: I0812 23:53:00.642644 2461 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 12 23:53:00.643330 kubelet[2461]: I0812 23:53:00.643309 2461 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 12 23:53:00.649097 kubelet[2461]: I0812 23:53:00.648961 2461 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:53:00.649759 kubelet[2461]: I0812 23:53:00.649707 2461 server.go:1289] "Started kubelet"
Aug 12 23:53:00.657605 kubelet[2461]: I0812 23:53:00.653670 2461 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:53:00.657605 kubelet[2461]: I0812 23:53:00.654578 2461 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:53:00.657605 kubelet[2461]: I0812 23:53:00.654874 2461 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:53:00.657605 kubelet[2461]: I0812 23:53:00.655006 2461 server.go:317] "Adding debug handlers to kubelet server"
Aug 12 23:53:00.658364 kubelet[2461]: I0812 23:53:00.658338 2461 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:53:00.662857 kubelet[2461]: I0812 23:53:00.660472 2461 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:53:00.662857 kubelet[2461]: E0812 23:53:00.660662 2461 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:53:00.662857 kubelet[2461]: I0812 23:53:00.660865 2461 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:53:00.662857 kubelet[2461]: I0812 23:53:00.661663 2461 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:53:00.662857 kubelet[2461]: I0812 23:53:00.661813 2461 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:53:00.666840 kubelet[2461]: I0812 23:53:00.666811 2461 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:53:00.669933 kubelet[2461]: I0812 23:53:00.669908 2461 factory.go:223] Registration of the containerd container factory successfully
Aug 12 23:53:00.670119 kubelet[2461]: I0812 23:53:00.670108 2461 factory.go:223] Registration of the systemd container factory successfully
Aug 12 23:53:00.670273 kubelet[2461]: E0812 23:53:00.669965 2461 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:53:00.679861 kubelet[2461]: I0812 23:53:00.679771 2461 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:53:00.680978 kubelet[2461]: I0812 23:53:00.680912 2461 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:53:00.680978 kubelet[2461]: I0812 23:53:00.680939 2461 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 12 23:53:00.680978 kubelet[2461]: I0812 23:53:00.680963 2461 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 12 23:53:00.680978 kubelet[2461]: I0812 23:53:00.680970 2461 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 12 23:53:00.681119 kubelet[2461]: E0812 23:53:00.681014 2461 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:53:00.717399 kubelet[2461]: I0812 23:53:00.717375 2461 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 12 23:53:00.717399 kubelet[2461]: I0812 23:53:00.717390 2461 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 12 23:53:00.717559 kubelet[2461]: I0812 23:53:00.717414 2461 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:53:00.717593 kubelet[2461]: I0812 23:53:00.717577 2461 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 12 23:53:00.717626 kubelet[2461]: I0812 23:53:00.717588 2461 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 12 23:53:00.717626 kubelet[2461]: I0812 23:53:00.717606 2461 policy_none.go:49] "None policy: Start"
Aug 12 23:53:00.717626 kubelet[2461]: I0812 23:53:00.717624 2461 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:53:00.717687 kubelet[2461]: I0812 23:53:00.717635 2461 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:53:00.717739 kubelet[2461]: I0812 23:53:00.717726 2461 state_mem.go:75] "Updated machine memory state"
Aug 12 23:53:00.721850 kubelet[2461]: E0812 23:53:00.721808 2461 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Aug 12 23:53:00.722529 kubelet[2461]: I0812 23:53:00.722019 2461 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:53:00.722529 kubelet[2461]: I0812 23:53:00.722038 2461 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:53:00.722630 kubelet[2461]: I0812 23:53:00.722589 2461 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:53:00.724216 kubelet[2461]: E0812 23:53:00.724158 2461 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 12 23:53:00.781775 kubelet[2461]: I0812 23:53:00.781727 2461 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:53:00.781911 kubelet[2461]: I0812 23:53:00.781742 2461 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:53:00.781911 kubelet[2461]: I0812 23:53:00.781897 2461 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:53:00.796908 kubelet[2461]: E0812 23:53:00.796864 2461 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:53:00.827820 kubelet[2461]: I0812 23:53:00.827786 2461 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:53:00.836435 kubelet[2461]: I0812 23:53:00.836366 2461 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Aug 12 23:53:00.837481 kubelet[2461]: I0812 23:53:00.837451 2461 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 12 23:53:00.963565 kubelet[2461]: I0812 23:53:00.963414 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78eecf08727bcc67062fe6c6b50c13ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"78eecf08727bcc67062fe6c6b50c13ed\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:53:00.963565 kubelet[2461]: I0812 23:53:00.963458 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78eecf08727bcc67062fe6c6b50c13ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"78eecf08727bcc67062fe6c6b50c13ed\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:53:00.963565 kubelet[2461]: I0812 23:53:00.963480 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:53:00.963565 kubelet[2461]: I0812 23:53:00.963509 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:53:00.963565 kubelet[2461]: I0812 23:53:00.963538 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78eecf08727bcc67062fe6c6b50c13ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"78eecf08727bcc67062fe6c6b50c13ed\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:53:00.963764 kubelet[2461]: I0812 23:53:00.963553 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:53:00.963764 kubelet[2461]: I0812 23:53:00.963566 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:53:00.963764 kubelet[2461]: I0812 23:53:00.963583 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:53:00.963764 kubelet[2461]: I0812 23:53:00.963598 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost"
Aug 12 23:53:01.088573 kubelet[2461]: E0812 23:53:01.088372 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:01.099104 kubelet[2461]: E0812 23:53:01.098844 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:01.099104 kubelet[2461]: E0812 23:53:01.097733 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:01.119103 sudo[2502]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 12 23:53:01.119394 sudo[2502]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 12 23:53:01.591337 sudo[2502]: pam_unix(sudo:session): session closed for user root
Aug 12 23:53:01.642970 kubelet[2461]: I0812 23:53:01.642561 2461 apiserver.go:52] "Watching apiserver"
Aug 12 23:53:01.662899 kubelet[2461]: I0812 23:53:01.662091 2461 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 12 23:53:01.699691 kubelet[2461]: I0812 23:53:01.698237 2461 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:53:01.699691 kubelet[2461]: I0812 23:53:01.698639 2461 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:53:01.699691 kubelet[2461]: E0812 23:53:01.699004 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:01.706069 kubelet[2461]: E0812 23:53:01.706020 2461 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:53:01.706216 kubelet[2461]: E0812 23:53:01.706195 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:01.707787 kubelet[2461]: E0812 23:53:01.707756 2461 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:53:01.707910 kubelet[2461]: E0812 23:53:01.707890 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:01.722036 kubelet[2461]: I0812 23:53:01.721801 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.721785205 podStartE2EDuration="1.721785205s" podCreationTimestamp="2025-08-12 23:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:01.721672606 +0000 UTC m=+1.142859868" watchObservedRunningTime="2025-08-12 23:53:01.721785205 +0000 UTC m=+1.142972467"
Aug 12 23:53:01.740817 kubelet[2461]: I0812 23:53:01.740753 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.740735836 podStartE2EDuration="2.740735836s" podCreationTimestamp="2025-08-12 23:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:01.730404154 +0000 UTC m=+1.151591376" watchObservedRunningTime="2025-08-12 23:53:01.740735836 +0000 UTC m=+1.161923058"
Aug 12 23:53:02.699922 kubelet[2461]: E0812 23:53:02.699835 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:02.699922 kubelet[2461]: E0812 23:53:02.699877 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:03.702604 kubelet[2461]: E0812 23:53:03.702091 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:03.725975 sudo[1610]: pam_unix(sudo:session): session closed for user root
Aug 12 23:53:03.733667 sshd[1607]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:03.737596 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:36052.service: Deactivated successfully.
Aug 12 23:53:03.740117 systemd[1]: session-7.scope: Deactivated successfully.
Aug 12 23:53:03.743628 systemd[1]: session-7.scope: Consumed 8.463s CPU time, 158.0M memory peak, 0B memory swap peak.
Aug 12 23:53:03.744399 systemd-logind[1413]: Session 7 logged out. Waiting for processes to exit.
Aug 12 23:53:03.745560 systemd-logind[1413]: Removed session 7.
Aug 12 23:53:05.831178 kubelet[2461]: E0812 23:53:05.831110 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:05.873628 kubelet[2461]: I0812 23:53:05.873468 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.87344956 podStartE2EDuration="5.87344956s" podCreationTimestamp="2025-08-12 23:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:01.741072071 +0000 UTC m=+1.162259333" watchObservedRunningTime="2025-08-12 23:53:05.87344956 +0000 UTC m=+5.294636822"
Aug 12 23:53:06.590481 kubelet[2461]: I0812 23:53:06.590450 2461 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 12 23:53:06.605761 containerd[1436]: time="2025-08-12T23:53:06.605587864Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 12 23:53:06.606216 kubelet[2461]: I0812 23:53:06.605995 2461 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 12 23:53:06.710097 kubelet[2461]: E0812 23:53:06.709246 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:07.711184 kubelet[2461]: E0812 23:53:07.711138 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:07.794193 systemd[1]: Created slice kubepods-besteffort-pod2fcd7154_5d8c_4ab3_a2a4_2936d2a46737.slice - libcontainer container kubepods-besteffort-pod2fcd7154_5d8c_4ab3_a2a4_2936d2a46737.slice.
Aug 12 23:53:07.810653 kubelet[2461]: I0812 23:53:07.809781 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fcd7154-5d8c-4ab3-a2a4-2936d2a46737-lib-modules\") pod \"kube-proxy-gd2cw\" (UID: \"2fcd7154-5d8c-4ab3-a2a4-2936d2a46737\") " pod="kube-system/kube-proxy-gd2cw"
Aug 12 23:53:07.811320 kubelet[2461]: I0812 23:53:07.810825 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-config-path\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811320 kubelet[2461]: I0812 23:53:07.810855 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-kernel\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811320 kubelet[2461]: I0812 23:53:07.810875 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fcd7154-5d8c-4ab3-a2a4-2936d2a46737-kube-proxy\") pod \"kube-proxy-gd2cw\" (UID: \"2fcd7154-5d8c-4ab3-a2a4-2936d2a46737\") " pod="kube-system/kube-proxy-gd2cw"
Aug 12 23:53:07.811320 kubelet[2461]: I0812 23:53:07.810888 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fcd7154-5d8c-4ab3-a2a4-2936d2a46737-xtables-lock\") pod \"kube-proxy-gd2cw\" (UID: \"2fcd7154-5d8c-4ab3-a2a4-2936d2a46737\") " pod="kube-system/kube-proxy-gd2cw"
Aug 12 23:53:07.811320 kubelet[2461]: I0812 23:53:07.810902 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hklt\" (UniqueName: \"kubernetes.io/projected/2fcd7154-5d8c-4ab3-a2a4-2936d2a46737-kube-api-access-4hklt\") pod \"kube-proxy-gd2cw\" (UID: \"2fcd7154-5d8c-4ab3-a2a4-2936d2a46737\") " pod="kube-system/kube-proxy-gd2cw"
Aug 12 23:53:07.811487 kubelet[2461]: I0812 23:53:07.810920 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-run\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811487 kubelet[2461]: I0812 23:53:07.810934 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hostproc\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811487 kubelet[2461]: I0812 23:53:07.810963 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-bpf-maps\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811487 kubelet[2461]: I0812 23:53:07.810978 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-cgroup\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811487 kubelet[2461]: I0812 23:53:07.810992 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cni-path\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811487 kubelet[2461]: I0812 23:53:07.811008 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-etc-cni-netd\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811624 kubelet[2461]: I0812 23:53:07.811022 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-xtables-lock\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811624 kubelet[2461]: I0812 23:53:07.811035 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-clustermesh-secrets\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811624 kubelet[2461]: I0812 23:53:07.811051 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-net\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811624 kubelet[2461]: I0812 23:53:07.811065 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2f2p\" (UniqueName: \"kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-kube-api-access-l2f2p\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811624 kubelet[2461]: I0812 23:53:07.811081 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-lib-modules\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.811624 kubelet[2461]: I0812 23:53:07.811095 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hubble-tls\") pod \"cilium-mlbv9\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " pod="kube-system/cilium-mlbv9"
Aug 12 23:53:07.813124 systemd[1]: Created slice kubepods-burstable-pod8a7f5e73_05b4_4d41_b8cc_16cc0429a940.slice - libcontainer container kubepods-burstable-pod8a7f5e73_05b4_4d41_b8cc_16cc0429a940.slice.
Aug 12 23:53:07.820130 systemd[1]: Created slice kubepods-besteffort-pod9a9309d4_28ce_4dd3_8a21_da55f493afc8.slice - libcontainer container kubepods-besteffort-pod9a9309d4_28ce_4dd3_8a21_da55f493afc8.slice.
Aug 12 23:53:07.912040 kubelet[2461]: I0812 23:53:07.911976 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88c9\" (UniqueName: \"kubernetes.io/projected/9a9309d4-28ce-4dd3-8a21-da55f493afc8-kube-api-access-f88c9\") pod \"cilium-operator-6c4d7847fc-kjmqj\" (UID: \"9a9309d4-28ce-4dd3-8a21-da55f493afc8\") " pod="kube-system/cilium-operator-6c4d7847fc-kjmqj"
Aug 12 23:53:07.912040 kubelet[2461]: I0812 23:53:07.912026 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a9309d4-28ce-4dd3-8a21-da55f493afc8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kjmqj\" (UID: \"9a9309d4-28ce-4dd3-8a21-da55f493afc8\") " pod="kube-system/cilium-operator-6c4d7847fc-kjmqj"
Aug 12 23:53:08.107306 kubelet[2461]: E0812 23:53:08.107271 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:08.108291 containerd[1436]: time="2025-08-12T23:53:08.107859301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gd2cw,Uid:2fcd7154-5d8c-4ab3-a2a4-2936d2a46737,Namespace:kube-system,Attempt:0,}"
Aug 12 23:53:08.117476 kubelet[2461]: E0812 23:53:08.117159 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:08.117784 containerd[1436]: time="2025-08-12T23:53:08.117709959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlbv9,Uid:8a7f5e73-05b4-4d41-b8cc-16cc0429a940,Namespace:kube-system,Attempt:0,}"
Aug 12 23:53:08.124759 kubelet[2461]: E0812 23:53:08.124726 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:08.125302 containerd[1436]: time="2025-08-12T23:53:08.125260681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kjmqj,Uid:9a9309d4-28ce-4dd3-8a21-da55f493afc8,Namespace:kube-system,Attempt:0,}"
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.153128314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.153188474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.153202833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.153307232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.152886557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.152962356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.152977316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:53:08.153554 containerd[1436]: time="2025-08-12T23:53:08.153066195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:53:08.165658 containerd[1436]: time="2025-08-12T23:53:08.164140321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:53:08.165658 containerd[1436]: time="2025-08-12T23:53:08.164197240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:53:08.165658 containerd[1436]: time="2025-08-12T23:53:08.164928113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:53:08.165658 containerd[1436]: time="2025-08-12T23:53:08.165113031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:53:08.180706 systemd[1]: Started cri-containerd-3a006a11125079f63ddfd15c400112ceeb3337b74507b57f2b5a1106c639c0ae.scope - libcontainer container 3a006a11125079f63ddfd15c400112ceeb3337b74507b57f2b5a1106c639c0ae.
Aug 12 23:53:08.182123 systemd[1]: Started cri-containerd-457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45.scope - libcontainer container 457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45.
Aug 12 23:53:08.187526 systemd[1]: Started cri-containerd-7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0.scope - libcontainer container 7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0.
Aug 12 23:53:08.217703 containerd[1436]: time="2025-08-12T23:53:08.217639329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gd2cw,Uid:2fcd7154-5d8c-4ab3-a2a4-2936d2a46737,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a006a11125079f63ddfd15c400112ceeb3337b74507b57f2b5a1106c639c0ae\"" Aug 12 23:53:08.218712 kubelet[2461]: E0812 23:53:08.218685 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:08.221174 containerd[1436]: time="2025-08-12T23:53:08.221136133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlbv9,Uid:8a7f5e73-05b4-4d41-b8cc-16cc0429a940,Namespace:kube-system,Attempt:0,} returns sandbox id \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\"" Aug 12 23:53:08.224564 kubelet[2461]: E0812 23:53:08.224499 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:08.226776 containerd[1436]: time="2025-08-12T23:53:08.226728436Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 12 23:53:08.229385 containerd[1436]: time="2025-08-12T23:53:08.228367899Z" level=info msg="CreateContainer within sandbox \"3a006a11125079f63ddfd15c400112ceeb3337b74507b57f2b5a1106c639c0ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:53:08.239187 containerd[1436]: time="2025-08-12T23:53:08.239063709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kjmqj,Uid:9a9309d4-28ce-4dd3-8a21-da55f493afc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0\"" Aug 12 23:53:08.243013 kubelet[2461]: E0812 23:53:08.240083 2461 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:08.269967 containerd[1436]: time="2025-08-12T23:53:08.269911031Z" level=info msg="CreateContainer within sandbox \"3a006a11125079f63ddfd15c400112ceeb3337b74507b57f2b5a1106c639c0ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f7bad9cccef3fd037d4bfdff4acf70a24030e47cce4aa354a7e3f99b0c2d87b\"" Aug 12 23:53:08.271963 containerd[1436]: time="2025-08-12T23:53:08.270604544Z" level=info msg="StartContainer for \"0f7bad9cccef3fd037d4bfdff4acf70a24030e47cce4aa354a7e3f99b0c2d87b\"" Aug 12 23:53:08.292717 systemd[1]: Started cri-containerd-0f7bad9cccef3fd037d4bfdff4acf70a24030e47cce4aa354a7e3f99b0c2d87b.scope - libcontainer container 0f7bad9cccef3fd037d4bfdff4acf70a24030e47cce4aa354a7e3f99b0c2d87b. Aug 12 23:53:08.320830 containerd[1436]: time="2025-08-12T23:53:08.320776067Z" level=info msg="StartContainer for \"0f7bad9cccef3fd037d4bfdff4acf70a24030e47cce4aa354a7e3f99b0c2d87b\" returns successfully" Aug 12 23:53:08.718740 kubelet[2461]: E0812 23:53:08.718333 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:09.471551 kubelet[2461]: E0812 23:53:09.471113 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:09.489813 kubelet[2461]: I0812 23:53:09.489717 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gd2cw" podStartSLOduration=2.489700326 podStartE2EDuration="2.489700326s" podCreationTimestamp="2025-08-12 23:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-08-12 23:53:08.732387786 +0000 UTC m=+8.153575088" watchObservedRunningTime="2025-08-12 23:53:09.489700326 +0000 UTC m=+8.910887588" Aug 12 23:53:09.720603 kubelet[2461]: E0812 23:53:09.720392 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:09.871252 kubelet[2461]: E0812 23:53:09.871128 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:10.724305 kubelet[2461]: E0812 23:53:10.724268 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:10.724842 kubelet[2461]: E0812 23:53:10.724809 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:14.010157 update_engine[1421]: I20250812 23:53:14.009549 1421 update_attempter.cc:509] Updating boot flags... Aug 12 23:53:14.051634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2853) Aug 12 23:53:14.129553 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2855) Aug 12 23:53:15.291285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538174944.mount: Deactivated successfully. 
Aug 12 23:53:16.712413 containerd[1436]: time="2025-08-12T23:53:16.712339561Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Aug 12 23:53:16.716565 containerd[1436]: time="2025-08-12T23:53:16.716507333Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.489737538s" Aug 12 23:53:16.716565 containerd[1436]: time="2025-08-12T23:53:16.716569132Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 12 23:53:16.718144 containerd[1436]: time="2025-08-12T23:53:16.718116002Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 12 23:53:16.738543 containerd[1436]: time="2025-08-12T23:53:16.738472903Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:53:16.739497 containerd[1436]: time="2025-08-12T23:53:16.739323577Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 12 23:53:16.740295 containerd[1436]: time="2025-08-12T23:53:16.740237371Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:53:16.782000 containerd[1436]: time="2025-08-12T23:53:16.781956006Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\"" Aug 12 23:53:16.782786 containerd[1436]: time="2025-08-12T23:53:16.782620441Z" level=info msg="StartContainer for \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\"" Aug 12 23:53:16.810738 systemd[1]: Started cri-containerd-96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815.scope - libcontainer container 96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815. Aug 12 23:53:16.852427 containerd[1436]: time="2025-08-12T23:53:16.852370365Z" level=info msg="StartContainer for \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\" returns successfully" Aug 12 23:53:16.906960 systemd[1]: cri-containerd-96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815.scope: Deactivated successfully. 
Aug 12 23:53:17.032807 containerd[1436]: time="2025-08-12T23:53:17.031909588Z" level=info msg="shim disconnected" id=96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815 namespace=k8s.io Aug 12 23:53:17.032807 containerd[1436]: time="2025-08-12T23:53:17.031969228Z" level=warning msg="cleaning up after shim disconnected" id=96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815 namespace=k8s.io Aug 12 23:53:17.032807 containerd[1436]: time="2025-08-12T23:53:17.031978828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:53:17.744662 kubelet[2461]: E0812 23:53:17.744617 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:17.764318 containerd[1436]: time="2025-08-12T23:53:17.764168382Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 12 23:53:17.781823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815-rootfs.mount: Deactivated successfully. 
Aug 12 23:53:17.817589 containerd[1436]: time="2025-08-12T23:53:17.817541074Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\"" Aug 12 23:53:17.820481 containerd[1436]: time="2025-08-12T23:53:17.819408022Z" level=info msg="StartContainer for \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\"" Aug 12 23:53:17.865787 systemd[1]: Started cri-containerd-687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba.scope - libcontainer container 687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba. Aug 12 23:53:17.903619 containerd[1436]: time="2025-08-12T23:53:17.903568234Z" level=info msg="StartContainer for \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\" returns successfully" Aug 12 23:53:17.937707 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:53:17.938191 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:53:17.938282 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:53:17.943970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:53:17.944194 systemd[1]: cri-containerd-687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba.scope: Deactivated successfully. Aug 12 23:53:17.982013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 12 23:53:17.983184 containerd[1436]: time="2025-08-12T23:53:17.982369762Z" level=info msg="shim disconnected" id=687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba namespace=k8s.io Aug 12 23:53:17.983313 containerd[1436]: time="2025-08-12T23:53:17.983195596Z" level=warning msg="cleaning up after shim disconnected" id=687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba namespace=k8s.io Aug 12 23:53:17.983313 containerd[1436]: time="2025-08-12T23:53:17.983208756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:53:18.207001 containerd[1436]: time="2025-08-12T23:53:18.206933602Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:53:18.211611 containerd[1436]: time="2025-08-12T23:53:18.211565573Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Aug 12 23:53:18.212755 containerd[1436]: time="2025-08-12T23:53:18.212726806Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:53:18.214359 containerd[1436]: time="2025-08-12T23:53:18.214077317Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.495924195s" Aug 12 23:53:18.214359 containerd[1436]: time="2025-08-12T23:53:18.214119157Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 12 23:53:18.224124 containerd[1436]: time="2025-08-12T23:53:18.223996256Z" level=info msg="CreateContainer within sandbox \"7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 12 23:53:18.243557 containerd[1436]: time="2025-08-12T23:53:18.243487375Z" level=info msg="CreateContainer within sandbox \"7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\"" Aug 12 23:53:18.244543 containerd[1436]: time="2025-08-12T23:53:18.244054011Z" level=info msg="StartContainer for \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\"" Aug 12 23:53:18.274770 systemd[1]: Started cri-containerd-6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d.scope - libcontainer container 6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d. 
Aug 12 23:53:18.299390 containerd[1436]: time="2025-08-12T23:53:18.299343508Z" level=info msg="StartContainer for \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\" returns successfully" Aug 12 23:53:18.748326 kubelet[2461]: E0812 23:53:18.748274 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:18.753810 kubelet[2461]: E0812 23:53:18.753749 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:18.764822 containerd[1436]: time="2025-08-12T23:53:18.764760579Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 12 23:53:18.787076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba-rootfs.mount: Deactivated successfully. Aug 12 23:53:18.836363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138192464.mount: Deactivated successfully. 
Aug 12 23:53:18.865471 kubelet[2461]: I0812 23:53:18.865377 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kjmqj" podStartSLOduration=1.8919774839999999 podStartE2EDuration="11.865357834s" podCreationTimestamp="2025-08-12 23:53:07 +0000 UTC" firstStartedPulling="2025-08-12 23:53:08.241752041 +0000 UTC m=+7.662939303" lastFinishedPulling="2025-08-12 23:53:18.215132391 +0000 UTC m=+17.636319653" observedRunningTime="2025-08-12 23:53:18.789543385 +0000 UTC m=+18.210730687" watchObservedRunningTime="2025-08-12 23:53:18.865357834 +0000 UTC m=+18.286545096" Aug 12 23:53:18.875966 containerd[1436]: time="2025-08-12T23:53:18.875730970Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\"" Aug 12 23:53:18.879838 containerd[1436]: time="2025-08-12T23:53:18.877820957Z" level=info msg="StartContainer for \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\"" Aug 12 23:53:18.957853 systemd[1]: Started cri-containerd-c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83.scope - libcontainer container c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83. Aug 12 23:53:19.021433 containerd[1436]: time="2025-08-12T23:53:19.021298232Z" level=info msg="StartContainer for \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\" returns successfully" Aug 12 23:53:19.035594 systemd[1]: cri-containerd-c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83.scope: Deactivated successfully. 
Aug 12 23:53:19.170472 containerd[1436]: time="2025-08-12T23:53:19.167980963Z" level=info msg="shim disconnected" id=c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83 namespace=k8s.io Aug 12 23:53:19.170472 containerd[1436]: time="2025-08-12T23:53:19.168045602Z" level=warning msg="cleaning up after shim disconnected" id=c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83 namespace=k8s.io Aug 12 23:53:19.170472 containerd[1436]: time="2025-08-12T23:53:19.168054602Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:53:19.762346 kubelet[2461]: E0812 23:53:19.762295 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:19.763036 kubelet[2461]: E0812 23:53:19.762842 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:19.785194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83-rootfs.mount: Deactivated successfully. 
Aug 12 23:53:19.810805 containerd[1436]: time="2025-08-12T23:53:19.810753714Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 12 23:53:19.857358 containerd[1436]: time="2025-08-12T23:53:19.857296518Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\"" Aug 12 23:53:19.858207 containerd[1436]: time="2025-08-12T23:53:19.858166153Z" level=info msg="StartContainer for \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\"" Aug 12 23:53:19.899755 systemd[1]: Started cri-containerd-3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e.scope - libcontainer container 3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e. Aug 12 23:53:19.926745 systemd[1]: cri-containerd-3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e.scope: Deactivated successfully. 
Aug 12 23:53:19.927489 containerd[1436]: time="2025-08-12T23:53:19.927328103Z" level=info msg="StartContainer for \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\" returns successfully" Aug 12 23:53:19.992101 containerd[1436]: time="2025-08-12T23:53:19.992029759Z" level=info msg="shim disconnected" id=3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e namespace=k8s.io Aug 12 23:53:19.992101 containerd[1436]: time="2025-08-12T23:53:19.992091079Z" level=warning msg="cleaning up after shim disconnected" id=3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e namespace=k8s.io Aug 12 23:53:19.992101 containerd[1436]: time="2025-08-12T23:53:19.992100719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:53:20.768695 kubelet[2461]: E0812 23:53:20.768441 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:20.779552 containerd[1436]: time="2025-08-12T23:53:20.779106221Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 12 23:53:20.785322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e-rootfs.mount: Deactivated successfully. 
Aug 12 23:53:20.807072 containerd[1436]: time="2025-08-12T23:53:20.807008903Z" level=info msg="CreateContainer within sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\"" Aug 12 23:53:20.808257 containerd[1436]: time="2025-08-12T23:53:20.808190817Z" level=info msg="StartContainer for \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\"" Aug 12 23:53:20.841759 systemd[1]: Started cri-containerd-9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90.scope - libcontainer container 9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90. Aug 12 23:53:20.898501 containerd[1436]: time="2025-08-12T23:53:20.898440386Z" level=info msg="StartContainer for \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\" returns successfully" Aug 12 23:53:21.161540 kubelet[2461]: I0812 23:53:21.161496 2461 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 12 23:53:21.227167 systemd[1]: Created slice kubepods-burstable-pod06f50a3b_4437_4777_9aa6_3f1ff9c79a9f.slice - libcontainer container kubepods-burstable-pod06f50a3b_4437_4777_9aa6_3f1ff9c79a9f.slice. Aug 12 23:53:21.246431 systemd[1]: Created slice kubepods-burstable-pod7dc767b9_3ac2_4e5b_a3e4_59f2eaa1932c.slice - libcontainer container kubepods-burstable-pod7dc767b9_3ac2_4e5b_a3e4_59f2eaa1932c.slice. 
Aug 12 23:53:21.403829 kubelet[2461]: I0812 23:53:21.403772 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kzmk\" (UniqueName: \"kubernetes.io/projected/7dc767b9-3ac2-4e5b-a3e4-59f2eaa1932c-kube-api-access-6kzmk\") pod \"coredns-674b8bbfcf-vk97l\" (UID: \"7dc767b9-3ac2-4e5b-a3e4-59f2eaa1932c\") " pod="kube-system/coredns-674b8bbfcf-vk97l" Aug 12 23:53:21.403829 kubelet[2461]: I0812 23:53:21.403834 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06f50a3b-4437-4777-9aa6-3f1ff9c79a9f-config-volume\") pod \"coredns-674b8bbfcf-r9sfr\" (UID: \"06f50a3b-4437-4777-9aa6-3f1ff9c79a9f\") " pod="kube-system/coredns-674b8bbfcf-r9sfr" Aug 12 23:53:21.404051 kubelet[2461]: I0812 23:53:21.403854 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dc767b9-3ac2-4e5b-a3e4-59f2eaa1932c-config-volume\") pod \"coredns-674b8bbfcf-vk97l\" (UID: \"7dc767b9-3ac2-4e5b-a3e4-59f2eaa1932c\") " pod="kube-system/coredns-674b8bbfcf-vk97l" Aug 12 23:53:21.404051 kubelet[2461]: I0812 23:53:21.403871 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666xj\" (UniqueName: \"kubernetes.io/projected/06f50a3b-4437-4777-9aa6-3f1ff9c79a9f-kube-api-access-666xj\") pod \"coredns-674b8bbfcf-r9sfr\" (UID: \"06f50a3b-4437-4777-9aa6-3f1ff9c79a9f\") " pod="kube-system/coredns-674b8bbfcf-r9sfr" Aug 12 23:53:21.541111 kubelet[2461]: E0812 23:53:21.540886 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:21.542296 containerd[1436]: time="2025-08-12T23:53:21.542245315Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-r9sfr,Uid:06f50a3b-4437-4777-9aa6-3f1ff9c79a9f,Namespace:kube-system,Attempt:0,}" Aug 12 23:53:21.552446 kubelet[2461]: E0812 23:53:21.552406 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:21.554584 containerd[1436]: time="2025-08-12T23:53:21.554530769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vk97l,Uid:7dc767b9-3ac2-4e5b-a3e4-59f2eaa1932c,Namespace:kube-system,Attempt:0,}" Aug 12 23:53:21.773756 kubelet[2461]: E0812 23:53:21.773725 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:21.790115 kubelet[2461]: I0812 23:53:21.790047 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mlbv9" podStartSLOduration=6.298296892 podStartE2EDuration="14.790032174s" podCreationTimestamp="2025-08-12 23:53:07 +0000 UTC" firstStartedPulling="2025-08-12 23:53:08.226217161 +0000 UTC m=+7.647404383" lastFinishedPulling="2025-08-12 23:53:16.717952443 +0000 UTC m=+16.139139665" observedRunningTime="2025-08-12 23:53:21.789745976 +0000 UTC m=+21.210933278" watchObservedRunningTime="2025-08-12 23:53:21.790032174 +0000 UTC m=+21.211219436" Aug 12 23:53:22.775570 kubelet[2461]: E0812 23:53:22.775178 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:23.633244 systemd-networkd[1372]: cilium_host: Link UP Aug 12 23:53:23.633377 systemd-networkd[1372]: cilium_net: Link UP Aug 12 23:53:23.633501 systemd-networkd[1372]: cilium_net: Gained carrier Aug 12 23:53:23.634609 systemd-networkd[1372]: cilium_host: Gained carrier Aug 12 23:53:23.744669 
systemd-networkd[1372]: cilium_vxlan: Link UP Aug 12 23:53:23.744676 systemd-networkd[1372]: cilium_vxlan: Gained carrier Aug 12 23:53:23.777538 kubelet[2461]: E0812 23:53:23.777490 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:23.835836 systemd-networkd[1372]: cilium_host: Gained IPv6LL Aug 12 23:53:23.939693 systemd-networkd[1372]: cilium_net: Gained IPv6LL Aug 12 23:53:24.110586 kernel: NET: Registered PF_ALG protocol family Aug 12 23:53:24.826212 systemd-networkd[1372]: lxc_health: Link UP Aug 12 23:53:24.826856 systemd-networkd[1372]: lxc_health: Gained carrier Aug 12 23:53:25.233898 systemd-networkd[1372]: lxc7da26ca39a94: Link UP Aug 12 23:53:25.244588 kernel: eth0: renamed from tmp95bfe Aug 12 23:53:25.255967 systemd-networkd[1372]: lxc87191d459405: Link UP Aug 12 23:53:25.276647 systemd-networkd[1372]: lxc7da26ca39a94: Gained carrier Aug 12 23:53:25.278722 kernel: eth0: renamed from tmp0aa32 Aug 12 23:53:25.291707 systemd-networkd[1372]: lxc87191d459405: Gained carrier Aug 12 23:53:25.381477 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Aug 12 23:53:26.151282 kubelet[2461]: E0812 23:53:26.150721 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:26.533136 systemd-networkd[1372]: lxc_health: Gained IPv6LL Aug 12 23:53:26.661627 systemd-networkd[1372]: lxc7da26ca39a94: Gained IPv6LL Aug 12 23:53:26.785761 kubelet[2461]: E0812 23:53:26.785357 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:27.107768 systemd-networkd[1372]: lxc87191d459405: Gained IPv6LL Aug 12 23:53:27.787181 kubelet[2461]: E0812 23:53:27.787075 2461 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:29.439544 containerd[1436]: time="2025-08-12T23:53:29.439239187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:53:29.439544 containerd[1436]: time="2025-08-12T23:53:29.439289547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:53:29.439544 containerd[1436]: time="2025-08-12T23:53:29.439300587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:29.439544 containerd[1436]: time="2025-08-12T23:53:29.439378106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:29.450212 containerd[1436]: time="2025-08-12T23:53:29.448881669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:53:29.450212 containerd[1436]: time="2025-08-12T23:53:29.448939069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:53:29.450212 containerd[1436]: time="2025-08-12T23:53:29.449255708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:29.450212 containerd[1436]: time="2025-08-12T23:53:29.449367387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:29.475762 systemd[1]: Started cri-containerd-0aa324cb22fda2dcdf4a66dad1d5f2f95bfa93cb328f0a1e624e2cf68a6dfeaf.scope - libcontainer container 0aa324cb22fda2dcdf4a66dad1d5f2f95bfa93cb328f0a1e624e2cf68a6dfeaf. Aug 12 23:53:29.477587 systemd[1]: Started cri-containerd-95bfe6805389421959075dd8f85bda1bad8025284a1c6fcdc9a0396a990a95de.scope - libcontainer container 95bfe6805389421959075dd8f85bda1bad8025284a1c6fcdc9a0396a990a95de. Aug 12 23:53:29.490859 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:53:29.500939 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:53:29.516413 containerd[1436]: time="2025-08-12T23:53:29.516346405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9sfr,Uid:06f50a3b-4437-4777-9aa6-3f1ff9c79a9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aa324cb22fda2dcdf4a66dad1d5f2f95bfa93cb328f0a1e624e2cf68a6dfeaf\"" Aug 12 23:53:29.517699 kubelet[2461]: E0812 23:53:29.517668 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:29.523866 containerd[1436]: time="2025-08-12T23:53:29.523810776Z" level=info msg="CreateContainer within sandbox \"0aa324cb22fda2dcdf4a66dad1d5f2f95bfa93cb328f0a1e624e2cf68a6dfeaf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:53:29.529059 containerd[1436]: time="2025-08-12T23:53:29.529005276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vk97l,Uid:7dc767b9-3ac2-4e5b-a3e4-59f2eaa1932c,Namespace:kube-system,Attempt:0,} returns sandbox id \"95bfe6805389421959075dd8f85bda1bad8025284a1c6fcdc9a0396a990a95de\"" Aug 12 23:53:29.529765 kubelet[2461]: E0812 23:53:29.529737 2461 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:29.538712 containerd[1436]: time="2025-08-12T23:53:29.538282199Z" level=info msg="CreateContainer within sandbox \"95bfe6805389421959075dd8f85bda1bad8025284a1c6fcdc9a0396a990a95de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:53:29.548648 containerd[1436]: time="2025-08-12T23:53:29.548595319Z" level=info msg="CreateContainer within sandbox \"0aa324cb22fda2dcdf4a66dad1d5f2f95bfa93cb328f0a1e624e2cf68a6dfeaf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f98e23a4fe3ab5aa941e5bbe1b79c556b649103bf07428b0ed8f144d0b7ff811\"" Aug 12 23:53:29.549464 containerd[1436]: time="2025-08-12T23:53:29.549413796Z" level=info msg="StartContainer for \"f98e23a4fe3ab5aa941e5bbe1b79c556b649103bf07428b0ed8f144d0b7ff811\"" Aug 12 23:53:29.551694 containerd[1436]: time="2025-08-12T23:53:29.551655907Z" level=info msg="CreateContainer within sandbox \"95bfe6805389421959075dd8f85bda1bad8025284a1c6fcdc9a0396a990a95de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25d449268d054ff3bdbef559dc9c1a009bb49a33ac0563fea672edb08d139ce1\"" Aug 12 23:53:29.553107 containerd[1436]: time="2025-08-12T23:53:29.553058062Z" level=info msg="StartContainer for \"25d449268d054ff3bdbef559dc9c1a009bb49a33ac0563fea672edb08d139ce1\"" Aug 12 23:53:29.579766 systemd[1]: Started cri-containerd-f98e23a4fe3ab5aa941e5bbe1b79c556b649103bf07428b0ed8f144d0b7ff811.scope - libcontainer container f98e23a4fe3ab5aa941e5bbe1b79c556b649103bf07428b0ed8f144d0b7ff811. Aug 12 23:53:29.584548 systemd[1]: Started cri-containerd-25d449268d054ff3bdbef559dc9c1a009bb49a33ac0563fea672edb08d139ce1.scope - libcontainer container 25d449268d054ff3bdbef559dc9c1a009bb49a33ac0563fea672edb08d139ce1. 
Aug 12 23:53:29.621314 containerd[1436]: time="2025-08-12T23:53:29.621230195Z" level=info msg="StartContainer for \"f98e23a4fe3ab5aa941e5bbe1b79c556b649103bf07428b0ed8f144d0b7ff811\" returns successfully" Aug 12 23:53:29.621438 containerd[1436]: time="2025-08-12T23:53:29.621251195Z" level=info msg="StartContainer for \"25d449268d054ff3bdbef559dc9c1a009bb49a33ac0563fea672edb08d139ce1\" returns successfully" Aug 12 23:53:29.797839 kubelet[2461]: E0812 23:53:29.797342 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:29.807444 kubelet[2461]: E0812 23:53:29.802280 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:29.883336 kubelet[2461]: I0812 23:53:29.883119 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vk97l" podStartSLOduration=22.883105411 podStartE2EDuration="22.883105411s" podCreationTimestamp="2025-08-12 23:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:29.87800171 +0000 UTC m=+29.299188972" watchObservedRunningTime="2025-08-12 23:53:29.883105411 +0000 UTC m=+29.304292673" Aug 12 23:53:29.979826 kubelet[2461]: I0812 23:53:29.979753 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r9sfr" podStartSLOduration=22.979733672000002 podStartE2EDuration="22.979733672s" podCreationTimestamp="2025-08-12 23:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:29.979288874 +0000 UTC m=+29.400476136" watchObservedRunningTime="2025-08-12 23:53:29.979733672 +0000 UTC 
m=+29.400920934" Aug 12 23:53:30.804160 kubelet[2461]: E0812 23:53:30.804112 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:30.813706 kubelet[2461]: E0812 23:53:30.805948 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:31.807210 kubelet[2461]: E0812 23:53:31.806993 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:31.807210 kubelet[2461]: E0812 23:53:31.807053 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:32.867945 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:50650.service - OpenSSH per-connection server daemon (10.0.0.1:50650). Aug 12 23:53:32.924941 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 50650 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:32.928158 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:32.957711 systemd-logind[1413]: New session 8 of user core. Aug 12 23:53:32.967311 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 12 23:53:33.126492 sshd[3883]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:33.131488 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:50650.service: Deactivated successfully. Aug 12 23:53:33.133492 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:53:33.136484 systemd-logind[1413]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:53:33.138151 systemd-logind[1413]: Removed session 8. 
Aug 12 23:53:38.140741 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:50664.service - OpenSSH per-connection server daemon (10.0.0.1:50664). Aug 12 23:53:38.190842 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 50664 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:38.192822 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:38.200221 systemd-logind[1413]: New session 9 of user core. Aug 12 23:53:38.208635 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:53:38.338144 sshd[3901]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:38.343253 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:50664.service: Deactivated successfully. Aug 12 23:53:38.345889 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:53:38.346806 systemd-logind[1413]: Session 9 logged out. Waiting for processes to exit. Aug 12 23:53:38.347708 systemd-logind[1413]: Removed session 9. Aug 12 23:53:43.359859 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:56308.service - OpenSSH per-connection server daemon (10.0.0.1:56308). Aug 12 23:53:43.400625 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 56308 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:43.402873 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:43.408106 systemd-logind[1413]: New session 10 of user core. Aug 12 23:53:43.415773 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 12 23:53:43.557722 sshd[3918]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:43.566568 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:56308.service: Deactivated successfully. Aug 12 23:53:43.570024 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:53:43.575212 systemd-logind[1413]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:53:43.578512 systemd-logind[1413]: Removed session 10. 
Aug 12 23:53:48.571653 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:56322.service - OpenSSH per-connection server daemon (10.0.0.1:56322). Aug 12 23:53:48.611975 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 56322 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:48.614164 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:48.622897 systemd-logind[1413]: New session 11 of user core. Aug 12 23:53:48.634780 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 12 23:53:48.795960 sshd[3934]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:48.814103 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:56322.service: Deactivated successfully. Aug 12 23:53:48.822210 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:53:48.828185 systemd-logind[1413]: Session 11 logged out. Waiting for processes to exit. Aug 12 23:53:48.843288 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:56324.service - OpenSSH per-connection server daemon (10.0.0.1:56324). Aug 12 23:53:48.844445 systemd-logind[1413]: Removed session 11. Aug 12 23:53:48.884627 sshd[3949]: Accepted publickey for core from 10.0.0.1 port 56324 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:48.886956 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:48.891342 systemd-logind[1413]: New session 12 of user core. Aug 12 23:53:48.901816 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 12 23:53:49.076062 sshd[3949]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:49.091958 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:56324.service: Deactivated successfully. Aug 12 23:53:49.099571 systemd[1]: session-12.scope: Deactivated successfully. Aug 12 23:53:49.102825 systemd-logind[1413]: Session 12 logged out. Waiting for processes to exit. 
Aug 12 23:53:49.109298 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:56338.service - OpenSSH per-connection server daemon (10.0.0.1:56338). Aug 12 23:53:49.111299 systemd-logind[1413]: Removed session 12. Aug 12 23:53:49.172052 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 56338 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:49.174892 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:49.181346 systemd-logind[1413]: New session 13 of user core. Aug 12 23:53:49.192790 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 12 23:53:49.333054 sshd[3962]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:49.341391 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:56338.service: Deactivated successfully. Aug 12 23:53:49.346895 systemd[1]: session-13.scope: Deactivated successfully. Aug 12 23:53:49.348355 systemd-logind[1413]: Session 13 logged out. Waiting for processes to exit. Aug 12 23:53:49.349588 systemd-logind[1413]: Removed session 13. Aug 12 23:53:54.347079 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:60294.service - OpenSSH per-connection server daemon (10.0.0.1:60294). Aug 12 23:53:54.399986 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 60294 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:54.401629 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:54.406242 systemd-logind[1413]: New session 14 of user core. Aug 12 23:53:54.419194 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 12 23:53:54.569559 sshd[3976]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:54.576425 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:60294.service: Deactivated successfully. Aug 12 23:53:54.579478 systemd[1]: session-14.scope: Deactivated successfully. Aug 12 23:53:54.581740 systemd-logind[1413]: Session 14 logged out. 
Waiting for processes to exit. Aug 12 23:53:54.583552 systemd-logind[1413]: Removed session 14. Aug 12 23:53:59.579405 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:60310.service - OpenSSH per-connection server daemon (10.0.0.1:60310). Aug 12 23:53:59.620374 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 60310 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:59.621842 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:59.626686 systemd-logind[1413]: New session 15 of user core. Aug 12 23:53:59.637763 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 12 23:53:59.762003 sshd[3991]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:59.778315 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:60310.service: Deactivated successfully. Aug 12 23:53:59.781125 systemd[1]: session-15.scope: Deactivated successfully. Aug 12 23:53:59.782688 systemd-logind[1413]: Session 15 logged out. Waiting for processes to exit. Aug 12 23:53:59.784618 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:60326.service - OpenSSH per-connection server daemon (10.0.0.1:60326). Aug 12 23:53:59.786901 systemd-logind[1413]: Removed session 15. Aug 12 23:53:59.826649 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 60326 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:53:59.828062 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:59.831924 systemd-logind[1413]: New session 16 of user core. Aug 12 23:53:59.840720 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 12 23:54:00.068606 sshd[4005]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:00.076283 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:60326.service: Deactivated successfully. Aug 12 23:54:00.081086 systemd[1]: session-16.scope: Deactivated successfully. 
Aug 12 23:54:00.083829 systemd-logind[1413]: Session 16 logged out. Waiting for processes to exit. Aug 12 23:54:00.085806 systemd-logind[1413]: Removed session 16. Aug 12 23:54:00.092838 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:60338.service - OpenSSH per-connection server daemon (10.0.0.1:60338). Aug 12 23:54:00.138913 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 60338 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:00.140503 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:00.145793 systemd-logind[1413]: New session 17 of user core. Aug 12 23:54:00.154771 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 12 23:54:00.805679 sshd[4018]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:00.815292 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:60338.service: Deactivated successfully. Aug 12 23:54:00.820097 systemd[1]: session-17.scope: Deactivated successfully. Aug 12 23:54:00.822628 systemd-logind[1413]: Session 17 logged out. Waiting for processes to exit. Aug 12 23:54:00.831945 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:60340.service - OpenSSH per-connection server daemon (10.0.0.1:60340). Aug 12 23:54:00.834471 systemd-logind[1413]: Removed session 17. Aug 12 23:54:00.870482 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 60340 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:00.871660 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:00.876010 systemd-logind[1413]: New session 18 of user core. Aug 12 23:54:00.886731 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 12 23:54:01.136688 sshd[4040]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:01.148378 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:60340.service: Deactivated successfully. 
Aug 12 23:54:01.151250 systemd[1]: session-18.scope: Deactivated successfully. Aug 12 23:54:01.157030 systemd-logind[1413]: Session 18 logged out. Waiting for processes to exit. Aug 12 23:54:01.169953 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:60356.service - OpenSSH per-connection server daemon (10.0.0.1:60356). Aug 12 23:54:01.170822 systemd-logind[1413]: Removed session 18. Aug 12 23:54:01.219159 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 60356 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:01.220225 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:01.228329 systemd-logind[1413]: New session 19 of user core. Aug 12 23:54:01.243875 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 12 23:54:01.372349 sshd[4053]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:01.376140 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:60356.service: Deactivated successfully. Aug 12 23:54:01.378403 systemd[1]: session-19.scope: Deactivated successfully. Aug 12 23:54:01.381409 systemd-logind[1413]: Session 19 logged out. Waiting for processes to exit. Aug 12 23:54:01.382365 systemd-logind[1413]: Removed session 19. Aug 12 23:54:06.384880 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:54478.service - OpenSSH per-connection server daemon (10.0.0.1:54478). Aug 12 23:54:06.422292 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 54478 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:06.423941 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:06.428285 systemd-logind[1413]: New session 20 of user core. Aug 12 23:54:06.438308 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 12 23:54:06.569561 sshd[4069]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:06.577008 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:54478.service: Deactivated successfully. Aug 12 23:54:06.581549 systemd[1]: session-20.scope: Deactivated successfully. Aug 12 23:54:06.585073 systemd-logind[1413]: Session 20 logged out. Waiting for processes to exit. Aug 12 23:54:06.586802 systemd-logind[1413]: Removed session 20. Aug 12 23:54:11.581242 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:54490.service - OpenSSH per-connection server daemon (10.0.0.1:54490). Aug 12 23:54:11.633490 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 54490 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:11.635101 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:11.639890 systemd-logind[1413]: New session 21 of user core. Aug 12 23:54:11.649784 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 12 23:54:11.781823 sshd[4085]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:11.785956 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:54490.service: Deactivated successfully. Aug 12 23:54:11.787871 systemd[1]: session-21.scope: Deactivated successfully. Aug 12 23:54:11.788658 systemd-logind[1413]: Session 21 logged out. Waiting for processes to exit. Aug 12 23:54:11.789719 systemd-logind[1413]: Removed session 21. 
Aug 12 23:54:12.681965 kubelet[2461]: E0812 23:54:12.681872 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:14.682471 kubelet[2461]: E0812 23:54:14.681618 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:54:16.795289 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:42256.service - OpenSSH per-connection server daemon (10.0.0.1:42256). Aug 12 23:54:16.842422 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 42256 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:16.843837 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:16.847669 systemd-logind[1413]: New session 22 of user core. Aug 12 23:54:16.863919 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 12 23:54:16.982790 sshd[4100]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:16.995349 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:42256.service: Deactivated successfully. Aug 12 23:54:16.997288 systemd[1]: session-22.scope: Deactivated successfully. Aug 12 23:54:16.998866 systemd-logind[1413]: Session 22 logged out. Waiting for processes to exit. Aug 12 23:54:17.009914 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:42268.service - OpenSSH per-connection server daemon (10.0.0.1:42268). Aug 12 23:54:17.011438 systemd-logind[1413]: Removed session 22. Aug 12 23:54:17.048708 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 42268 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:17.050267 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:17.054937 systemd-logind[1413]: New session 23 of user core. 
Aug 12 23:54:17.065708 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 12 23:54:19.355952 containerd[1436]: time="2025-08-12T23:54:19.355907474Z" level=info msg="StopContainer for \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\" with timeout 30 (s)" Aug 12 23:54:19.357719 containerd[1436]: time="2025-08-12T23:54:19.357665612Z" level=info msg="Stop container \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\" with signal terminated" Aug 12 23:54:19.382734 systemd[1]: cri-containerd-6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d.scope: Deactivated successfully. Aug 12 23:54:19.388382 containerd[1436]: time="2025-08-12T23:54:19.388191579Z" level=info msg="StopContainer for \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\" with timeout 2 (s)" Aug 12 23:54:19.390385 containerd[1436]: time="2025-08-12T23:54:19.390350602Z" level=info msg="Stop container \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\" with signal terminated" Aug 12 23:54:19.400441 systemd-networkd[1372]: lxc_health: Link DOWN Aug 12 23:54:19.400449 systemd-networkd[1372]: lxc_health: Lost carrier Aug 12 23:54:19.408859 containerd[1436]: time="2025-08-12T23:54:19.408792759Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:54:19.420280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d-rootfs.mount: Deactivated successfully. Aug 12 23:54:19.428066 systemd[1]: cri-containerd-9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90.scope: Deactivated successfully. 
Aug 12 23:54:19.428838 containerd[1436]: time="2025-08-12T23:54:19.428779813Z" level=info msg="shim disconnected" id=6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d namespace=k8s.io Aug 12 23:54:19.428925 containerd[1436]: time="2025-08-12T23:54:19.428841813Z" level=warning msg="cleaning up after shim disconnected" id=6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d namespace=k8s.io Aug 12 23:54:19.428925 containerd[1436]: time="2025-08-12T23:54:19.428851773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:19.429555 systemd[1]: cri-containerd-9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90.scope: Consumed 7.972s CPU time. Aug 12 23:54:19.462505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90-rootfs.mount: Deactivated successfully. Aug 12 23:54:19.484950 containerd[1436]: time="2025-08-12T23:54:19.484862772Z" level=info msg="shim disconnected" id=9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90 namespace=k8s.io Aug 12 23:54:19.484950 containerd[1436]: time="2025-08-12T23:54:19.484933573Z" level=warning msg="cleaning up after shim disconnected" id=9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90 namespace=k8s.io Aug 12 23:54:19.484950 containerd[1436]: time="2025-08-12T23:54:19.484942573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:19.502678 containerd[1436]: time="2025-08-12T23:54:19.502612482Z" level=info msg="StopContainer for \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\" returns successfully" Aug 12 23:54:19.503359 containerd[1436]: time="2025-08-12T23:54:19.503325289Z" level=info msg="StopPodSandbox for \"7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0\"" Aug 12 23:54:19.503436 containerd[1436]: time="2025-08-12T23:54:19.503374570Z" level=info msg="Container to stop 
\"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:19.505357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0-shm.mount: Deactivated successfully. Aug 12 23:54:19.514665 systemd[1]: cri-containerd-7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0.scope: Deactivated successfully. Aug 12 23:54:19.516572 containerd[1436]: time="2025-08-12T23:54:19.516350229Z" level=info msg="StopContainer for \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\" returns successfully" Aug 12 23:54:19.518723 containerd[1436]: time="2025-08-12T23:54:19.518687574Z" level=info msg="StopPodSandbox for \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\"" Aug 12 23:54:19.520862 containerd[1436]: time="2025-08-12T23:54:19.518788655Z" level=info msg="Container to stop \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:19.520862 containerd[1436]: time="2025-08-12T23:54:19.518804575Z" level=info msg="Container to stop \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:19.520862 containerd[1436]: time="2025-08-12T23:54:19.518823335Z" level=info msg="Container to stop \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:19.520862 containerd[1436]: time="2025-08-12T23:54:19.518833735Z" level=info msg="Container to stop \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:19.520862 containerd[1436]: time="2025-08-12T23:54:19.518843015Z" level=info msg="Container to stop 
\"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:19.522852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45-shm.mount: Deactivated successfully. Aug 12 23:54:19.539904 systemd[1]: cri-containerd-457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45.scope: Deactivated successfully. Aug 12 23:54:19.583466 containerd[1436]: time="2025-08-12T23:54:19.583381865Z" level=info msg="shim disconnected" id=7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0 namespace=k8s.io Aug 12 23:54:19.583466 containerd[1436]: time="2025-08-12T23:54:19.583442426Z" level=warning msg="cleaning up after shim disconnected" id=7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0 namespace=k8s.io Aug 12 23:54:19.583466 containerd[1436]: time="2025-08-12T23:54:19.583450746Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:19.583957 containerd[1436]: time="2025-08-12T23:54:19.583900391Z" level=info msg="shim disconnected" id=457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45 namespace=k8s.io Aug 12 23:54:19.583957 containerd[1436]: time="2025-08-12T23:54:19.583947671Z" level=warning msg="cleaning up after shim disconnected" id=457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45 namespace=k8s.io Aug 12 23:54:19.584014 containerd[1436]: time="2025-08-12T23:54:19.583956031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:19.597281 containerd[1436]: time="2025-08-12T23:54:19.597221253Z" level=info msg="TearDown network for sandbox \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" successfully" Aug 12 23:54:19.597281 containerd[1436]: time="2025-08-12T23:54:19.597258694Z" level=info msg="StopPodSandbox for \"457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45\" returns successfully" 
Aug 12 23:54:19.598153 containerd[1436]: time="2025-08-12T23:54:19.597272614Z" level=info msg="TearDown network for sandbox \"7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0\" successfully" Aug 12 23:54:19.598153 containerd[1436]: time="2025-08-12T23:54:19.597309414Z" level=info msg="StopPodSandbox for \"7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0\" returns successfully" Aug 12 23:54:19.706775 kubelet[2461]: I0812 23:54:19.706565 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hostproc\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.706775 kubelet[2461]: I0812 23:54:19.706604 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-net\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.706775 kubelet[2461]: I0812 23:54:19.706644 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a9309d4-28ce-4dd3-8a21-da55f493afc8-cilium-config-path\") pod \"9a9309d4-28ce-4dd3-8a21-da55f493afc8\" (UID: \"9a9309d4-28ce-4dd3-8a21-da55f493afc8\") " Aug 12 23:54:19.706775 kubelet[2461]: I0812 23:54:19.706667 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-config-path\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707229 kubelet[2461]: I0812 23:54:19.706682 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-lib-modules\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707229 kubelet[2461]: I0812 23:54:19.706818 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f88c9\" (UniqueName: \"kubernetes.io/projected/9a9309d4-28ce-4dd3-8a21-da55f493afc8-kube-api-access-f88c9\") pod \"9a9309d4-28ce-4dd3-8a21-da55f493afc8\" (UID: \"9a9309d4-28ce-4dd3-8a21-da55f493afc8\") " Aug 12 23:54:19.707229 kubelet[2461]: I0812 23:54:19.706834 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-xtables-lock\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707229 kubelet[2461]: I0812 23:54:19.706852 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-cgroup\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707229 kubelet[2461]: I0812 23:54:19.706882 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cni-path\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707229 kubelet[2461]: I0812 23:54:19.706897 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-etc-cni-netd\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707358 kubelet[2461]: I0812 23:54:19.706913 2461 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2f2p\" (UniqueName: \"kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-kube-api-access-l2f2p\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707358 kubelet[2461]: I0812 23:54:19.706929 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-kernel\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707358 kubelet[2461]: I0812 23:54:19.706953 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hubble-tls\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707358 kubelet[2461]: I0812 23:54:19.706967 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-bpf-maps\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707358 kubelet[2461]: I0812 23:54:19.706981 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-run\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" (UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.707358 kubelet[2461]: I0812 23:54:19.707000 2461 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-clustermesh-secrets\") pod \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\" 
(UID: \"8a7f5e73-05b4-4d41-b8cc-16cc0429a940\") " Aug 12 23:54:19.709616 kubelet[2461]: I0812 23:54:19.709445 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hostproc" (OuterVolumeSpecName: "hostproc") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.716562 kubelet[2461]: I0812 23:54:19.716511 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a9309d4-28ce-4dd3-8a21-da55f493afc8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a9309d4-28ce-4dd3-8a21-da55f493afc8" (UID: "9a9309d4-28ce-4dd3-8a21-da55f493afc8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:54:19.716662 kubelet[2461]: I0812 23:54:19.716581 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.716662 kubelet[2461]: I0812 23:54:19.716615 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.716662 kubelet[2461]: I0812 23:54:19.716635 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.716662 kubelet[2461]: I0812 23:54:19.716650 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.717459 kubelet[2461]: I0812 23:54:19.717417 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.717553 kubelet[2461]: I0812 23:54:19.717464 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.717553 kubelet[2461]: I0812 23:54:19.717487 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.717699 kubelet[2461]: I0812 23:54:19.709445 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cni-path" (OuterVolumeSpecName: "cni-path") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.717699 kubelet[2461]: I0812 23:54:19.709445 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:19.718821 kubelet[2461]: I0812 23:54:19.718769 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:54:19.719603 kubelet[2461]: I0812 23:54:19.719571 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9309d4-28ce-4dd3-8a21-da55f493afc8-kube-api-access-f88c9" (OuterVolumeSpecName: "kube-api-access-f88c9") pod "9a9309d4-28ce-4dd3-8a21-da55f493afc8" (UID: "9a9309d4-28ce-4dd3-8a21-da55f493afc8"). InnerVolumeSpecName "kube-api-access-f88c9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:54:19.719959 kubelet[2461]: I0812 23:54:19.719932 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:54:19.720909 kubelet[2461]: I0812 23:54:19.720868 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 12 23:54:19.721334 kubelet[2461]: I0812 23:54:19.721307 2461 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-kube-api-access-l2f2p" (OuterVolumeSpecName: "kube-api-access-l2f2p") pod "8a7f5e73-05b4-4d41-b8cc-16cc0429a940" (UID: "8a7f5e73-05b4-4d41-b8cc-16cc0429a940"). InnerVolumeSpecName "kube-api-access-l2f2p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:54:19.807932 kubelet[2461]: I0812 23:54:19.807862 2461 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.807932 kubelet[2461]: I0812 23:54:19.807899 2461 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a9309d4-28ce-4dd3-8a21-da55f493afc8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.807932 kubelet[2461]: I0812 23:54:19.807907 2461 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.807932 kubelet[2461]: I0812 23:54:19.807916 2461 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.807932 kubelet[2461]: I0812 23:54:19.807924 2461 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f88c9\" (UniqueName: \"kubernetes.io/projected/9a9309d4-28ce-4dd3-8a21-da55f493afc8-kube-api-access-f88c9\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.807932 kubelet[2461]: I0812 23:54:19.807933 2461 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.807932 kubelet[2461]: I0812 23:54:19.807941 2461 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.807932 
kubelet[2461]: I0812 23:54:19.807950 2461 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.807958 2461 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.807968 2461 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l2f2p\" (UniqueName: \"kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-kube-api-access-l2f2p\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.807976 2461 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.807983 2461 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.807991 2461 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.807998 2461 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.808005 2461 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.808233 kubelet[2461]: I0812 23:54:19.808013 2461 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a7f5e73-05b4-4d41-b8cc-16cc0429a940-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:19.935983 systemd[1]: Removed slice kubepods-besteffort-pod9a9309d4_28ce_4dd3_8a21_da55f493afc8.slice - libcontainer container kubepods-besteffort-pod9a9309d4_28ce_4dd3_8a21_da55f493afc8.slice. Aug 12 23:54:19.937598 kubelet[2461]: I0812 23:54:19.937501 2461 scope.go:117] "RemoveContainer" containerID="6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d" Aug 12 23:54:19.940362 containerd[1436]: time="2025-08-12T23:54:19.940299121Z" level=info msg="RemoveContainer for \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\"" Aug 12 23:54:19.942691 systemd[1]: Removed slice kubepods-burstable-pod8a7f5e73_05b4_4d41_b8cc_16cc0429a940.slice - libcontainer container kubepods-burstable-pod8a7f5e73_05b4_4d41_b8cc_16cc0429a940.slice. Aug 12 23:54:19.942783 systemd[1]: kubepods-burstable-pod8a7f5e73_05b4_4d41_b8cc_16cc0429a940.slice: Consumed 8.150s CPU time. 
Aug 12 23:54:19.973040 containerd[1436]: time="2025-08-12T23:54:19.972902909Z" level=info msg="RemoveContainer for \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\" returns successfully" Aug 12 23:54:19.973467 kubelet[2461]: I0812 23:54:19.973283 2461 scope.go:117] "RemoveContainer" containerID="6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d" Aug 12 23:54:19.973635 containerd[1436]: time="2025-08-12T23:54:19.973564356Z" level=error msg="ContainerStatus for \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\": not found" Aug 12 23:54:19.982400 kubelet[2461]: E0812 23:54:19.982360 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\": not found" containerID="6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d" Aug 12 23:54:19.982553 kubelet[2461]: I0812 23:54:19.982400 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d"} err="failed to get container status \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bab9b6e938ebae7b90aed3709638bb61b3f7c4ca6a564de71d00641764abb7d\": not found" Aug 12 23:54:19.982553 kubelet[2461]: I0812 23:54:19.982444 2461 scope.go:117] "RemoveContainer" containerID="9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90" Aug 12 23:54:19.983794 containerd[1436]: time="2025-08-12T23:54:19.983748985Z" level=info msg="RemoveContainer for \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\"" Aug 12 23:54:20.012117 
containerd[1436]: time="2025-08-12T23:54:20.012048444Z" level=info msg="RemoveContainer for \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\" returns successfully" Aug 12 23:54:20.012387 kubelet[2461]: I0812 23:54:20.012310 2461 scope.go:117] "RemoveContainer" containerID="3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e" Aug 12 23:54:20.013620 containerd[1436]: time="2025-08-12T23:54:20.013585860Z" level=info msg="RemoveContainer for \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\"" Aug 12 23:54:20.030089 containerd[1436]: time="2025-08-12T23:54:20.029944909Z" level=info msg="RemoveContainer for \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\" returns successfully" Aug 12 23:54:20.030854 kubelet[2461]: I0812 23:54:20.030304 2461 scope.go:117] "RemoveContainer" containerID="c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83" Aug 12 23:54:20.033268 containerd[1436]: time="2025-08-12T23:54:20.033213943Z" level=info msg="RemoveContainer for \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\"" Aug 12 23:54:20.039542 containerd[1436]: time="2025-08-12T23:54:20.039430168Z" level=info msg="RemoveContainer for \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\" returns successfully" Aug 12 23:54:20.040483 kubelet[2461]: I0812 23:54:20.039706 2461 scope.go:117] "RemoveContainer" containerID="687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba" Aug 12 23:54:20.041182 containerd[1436]: time="2025-08-12T23:54:20.041148786Z" level=info msg="RemoveContainer for \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\"" Aug 12 23:54:20.044363 containerd[1436]: time="2025-08-12T23:54:20.044311538Z" level=info msg="RemoveContainer for \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\" returns successfully" Aug 12 23:54:20.044720 kubelet[2461]: I0812 23:54:20.044610 2461 scope.go:117] "RemoveContainer" 
containerID="96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815" Aug 12 23:54:20.045749 containerd[1436]: time="2025-08-12T23:54:20.045723113Z" level=info msg="RemoveContainer for \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\"" Aug 12 23:54:20.058052 containerd[1436]: time="2025-08-12T23:54:20.057989720Z" level=info msg="RemoveContainer for \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\" returns successfully" Aug 12 23:54:20.058376 kubelet[2461]: I0812 23:54:20.058333 2461 scope.go:117] "RemoveContainer" containerID="9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90" Aug 12 23:54:20.058675 containerd[1436]: time="2025-08-12T23:54:20.058633247Z" level=error msg="ContainerStatus for \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\": not found" Aug 12 23:54:20.058810 kubelet[2461]: E0812 23:54:20.058788 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\": not found" containerID="9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90" Aug 12 23:54:20.058843 kubelet[2461]: I0812 23:54:20.058820 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90"} err="failed to get container status \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\": rpc error: code = NotFound desc = an error occurred when try to find container \"9504fddec279e050eba38b96264b3d7f6eee12f7af39fb24ca698c576b191c90\": not found" Aug 12 23:54:20.058876 kubelet[2461]: I0812 23:54:20.058843 2461 scope.go:117] "RemoveContainer" 
containerID="3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e" Aug 12 23:54:20.059074 containerd[1436]: time="2025-08-12T23:54:20.059040411Z" level=error msg="ContainerStatus for \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\": not found" Aug 12 23:54:20.059226 kubelet[2461]: E0812 23:54:20.059202 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\": not found" containerID="3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e" Aug 12 23:54:20.059261 kubelet[2461]: I0812 23:54:20.059232 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e"} err="failed to get container status \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b7f84cdc20b83e95ea5b4455aecd706a6aab630eca3c9164cf08f1eb793052e\": not found" Aug 12 23:54:20.059261 kubelet[2461]: I0812 23:54:20.059249 2461 scope.go:117] "RemoveContainer" containerID="c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83" Aug 12 23:54:20.063397 containerd[1436]: time="2025-08-12T23:54:20.059429375Z" level=error msg="ContainerStatus for \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\": not found" Aug 12 23:54:20.063720 kubelet[2461]: E0812 23:54:20.063680 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\": not found" containerID="c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83" Aug 12 23:54:20.063769 kubelet[2461]: I0812 23:54:20.063737 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83"} err="failed to get container status \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\": rpc error: code = NotFound desc = an error occurred when try to find container \"c78dbe1ad2bf0587300f5ac2bedcb6fd86af9254f0332a30627b7c473ba90b83\": not found" Aug 12 23:54:20.063769 kubelet[2461]: I0812 23:54:20.063765 2461 scope.go:117] "RemoveContainer" containerID="687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba" Aug 12 23:54:20.064061 containerd[1436]: time="2025-08-12T23:54:20.064018023Z" level=error msg="ContainerStatus for \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\": not found" Aug 12 23:54:20.064196 kubelet[2461]: E0812 23:54:20.064169 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\": not found" containerID="687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba" Aug 12 23:54:20.064228 kubelet[2461]: I0812 23:54:20.064196 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba"} err="failed to get container status \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"687844e27a1cbbdb0f8f49b09f85e71c6424464d68898300f21cdcac8af3d8ba\": not found" Aug 12 23:54:20.064228 kubelet[2461]: I0812 23:54:20.064213 2461 scope.go:117] "RemoveContainer" containerID="96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815" Aug 12 23:54:20.064448 containerd[1436]: time="2025-08-12T23:54:20.064415067Z" level=error msg="ContainerStatus for \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\": not found" Aug 12 23:54:20.064594 kubelet[2461]: E0812 23:54:20.064568 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\": not found" containerID="96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815" Aug 12 23:54:20.064638 kubelet[2461]: I0812 23:54:20.064601 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815"} err="failed to get container status \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\": rpc error: code = NotFound desc = an error occurred when try to find container \"96433c581478525fab9de63f98ed34c37b1f3bf70d133ea13c1ddefd22674815\": not found" Aug 12 23:54:20.366507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7423762876f3ea362fc44e95b0947ed318d62357219dbcdcc8f07d50e54606e0-rootfs.mount: Deactivated successfully. Aug 12 23:54:20.366622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-457e61a6b51a27957112f4fbe8f04ea4cdb0705b6909c7e26fe02eaca24c1f45-rootfs.mount: Deactivated successfully. 
Aug 12 23:54:20.366676 systemd[1]: var-lib-kubelet-pods-9a9309d4\x2d28ce\x2d4dd3\x2d8a21\x2dda55f493afc8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df88c9.mount: Deactivated successfully. Aug 12 23:54:20.366741 systemd[1]: var-lib-kubelet-pods-8a7f5e73\x2d05b4\x2d4d41\x2db8cc\x2d16cc0429a940-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl2f2p.mount: Deactivated successfully. Aug 12 23:54:20.366795 systemd[1]: var-lib-kubelet-pods-8a7f5e73\x2d05b4\x2d4d41\x2db8cc\x2d16cc0429a940-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 12 23:54:20.366849 systemd[1]: var-lib-kubelet-pods-8a7f5e73\x2d05b4\x2d4d41\x2db8cc\x2d16cc0429a940-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 12 23:54:20.684485 kubelet[2461]: I0812 23:54:20.683613 2461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7f5e73-05b4-4d41-b8cc-16cc0429a940" path="/var/lib/kubelet/pods/8a7f5e73-05b4-4d41-b8cc-16cc0429a940/volumes" Aug 12 23:54:20.684485 kubelet[2461]: I0812 23:54:20.684167 2461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9309d4-28ce-4dd3-8a21-da55f493afc8" path="/var/lib/kubelet/pods/9a9309d4-28ce-4dd3-8a21-da55f493afc8/volumes" Aug 12 23:54:20.743600 kubelet[2461]: E0812 23:54:20.743553 2461 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 12 23:54:21.112205 sshd[4115]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:21.120755 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:42268.service: Deactivated successfully. Aug 12 23:54:21.122609 systemd[1]: session-23.scope: Deactivated successfully. Aug 12 23:54:21.123578 systemd[1]: session-23.scope: Consumed 1.390s CPU time. Aug 12 23:54:21.124878 systemd-logind[1413]: Session 23 logged out. Waiting for processes to exit. 
Aug 12 23:54:21.141200 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:42276.service - OpenSSH per-connection server daemon (10.0.0.1:42276). Aug 12 23:54:21.142562 systemd-logind[1413]: Removed session 23. Aug 12 23:54:21.186846 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 42276 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:21.188414 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:21.195384 systemd-logind[1413]: New session 24 of user core. Aug 12 23:54:21.205782 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 12 23:54:22.403715 kubelet[2461]: I0812 23:54:22.402942 2461 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-12T23:54:22Z","lastTransitionTime":"2025-08-12T23:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 12 23:54:22.918871 sshd[4277]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:22.928125 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:42276.service: Deactivated successfully. Aug 12 23:54:22.933829 systemd[1]: session-24.scope: Deactivated successfully. Aug 12 23:54:22.934012 systemd[1]: session-24.scope: Consumed 1.613s CPU time. Aug 12 23:54:22.937720 systemd-logind[1413]: Session 24 logged out. Waiting for processes to exit. Aug 12 23:54:22.951109 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:60688.service - OpenSSH per-connection server daemon (10.0.0.1:60688). Aug 12 23:54:22.956765 systemd-logind[1413]: Removed session 24. Aug 12 23:54:22.972223 systemd[1]: Created slice kubepods-burstable-pod67154912_f89b_4cc5_a47b_37f187a9371c.slice - libcontainer container kubepods-burstable-pod67154912_f89b_4cc5_a47b_37f187a9371c.slice. 
Aug 12 23:54:23.008976 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 60688 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:54:23.010373 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:54:23.014844 systemd-logind[1413]: New session 25 of user core.
Aug 12 23:54:23.023744 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 12 23:54:23.027022 kubelet[2461]: I0812 23:54:23.026865 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-xtables-lock\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027022 kubelet[2461]: I0812 23:54:23.026903 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67154912-f89b-4cc5-a47b-37f187a9371c-clustermesh-secrets\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027022 kubelet[2461]: I0812 23:54:23.026920 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67154912-f89b-4cc5-a47b-37f187a9371c-hubble-tls\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027022 kubelet[2461]: I0812 23:54:23.026934 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns65s\" (UniqueName: \"kubernetes.io/projected/67154912-f89b-4cc5-a47b-37f187a9371c-kube-api-access-ns65s\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027022 kubelet[2461]: I0812 23:54:23.026955 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-etc-cni-netd\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027022 kubelet[2461]: I0812 23:54:23.026970 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67154912-f89b-4cc5-a47b-37f187a9371c-cilium-config-path\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027245 kubelet[2461]: I0812 23:54:23.026998 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-host-proc-sys-net\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027245 kubelet[2461]: I0812 23:54:23.027033 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-hostproc\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027245 kubelet[2461]: I0812 23:54:23.027062 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-cilium-cgroup\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027245 kubelet[2461]: I0812 23:54:23.027092 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-host-proc-sys-kernel\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027245 kubelet[2461]: I0812 23:54:23.027122 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67154912-f89b-4cc5-a47b-37f187a9371c-cilium-ipsec-secrets\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027245 kubelet[2461]: I0812 23:54:23.027149 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-cilium-run\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027365 kubelet[2461]: I0812 23:54:23.027164 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-bpf-maps\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027365 kubelet[2461]: I0812 23:54:23.027180 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-cni-path\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.027365 kubelet[2461]: I0812 23:54:23.027194 2461 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67154912-f89b-4cc5-a47b-37f187a9371c-lib-modules\") pod \"cilium-kj44d\" (UID: \"67154912-f89b-4cc5-a47b-37f187a9371c\") " pod="kube-system/cilium-kj44d"
Aug 12 23:54:23.073669 sshd[4290]: pam_unix(sshd:session): session closed for user core
Aug 12 23:54:23.085449 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:60688.service: Deactivated successfully.
Aug 12 23:54:23.087314 systemd[1]: session-25.scope: Deactivated successfully.
Aug 12 23:54:23.089190 systemd-logind[1413]: Session 25 logged out. Waiting for processes to exit.
Aug 12 23:54:23.101461 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:60694.service - OpenSSH per-connection server daemon (10.0.0.1:60694).
Aug 12 23:54:23.102107 systemd-logind[1413]: Removed session 25.
Aug 12 23:54:23.139007 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 60694 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:54:23.140548 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:54:23.149616 systemd-logind[1413]: New session 26 of user core.
Aug 12 23:54:23.156702 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 12 23:54:23.277234 kubelet[2461]: E0812 23:54:23.277097 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:23.279270 containerd[1436]: time="2025-08-12T23:54:23.279174407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kj44d,Uid:67154912-f89b-4cc5-a47b-37f187a9371c,Namespace:kube-system,Attempt:0,}"
Aug 12 23:54:23.299419 containerd[1436]: time="2025-08-12T23:54:23.299319157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:54:23.299419 containerd[1436]: time="2025-08-12T23:54:23.299377318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:54:23.299419 containerd[1436]: time="2025-08-12T23:54:23.299392838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:54:23.299686 containerd[1436]: time="2025-08-12T23:54:23.299649400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:54:23.320770 systemd[1]: Started cri-containerd-f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4.scope - libcontainer container f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4.
Aug 12 23:54:23.341111 containerd[1436]: time="2025-08-12T23:54:23.341049032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kj44d,Uid:67154912-f89b-4cc5-a47b-37f187a9371c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\""
Aug 12 23:54:23.342223 kubelet[2461]: E0812 23:54:23.341866 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:23.350379 containerd[1436]: time="2025-08-12T23:54:23.350333079Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 12 23:54:23.376761 containerd[1436]: time="2025-08-12T23:54:23.376701249Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69\""
Aug 12 23:54:23.377555 containerd[1436]: time="2025-08-12T23:54:23.377435416Z" level=info msg="StartContainer for \"74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69\""
Aug 12 23:54:23.408764 systemd[1]: Started cri-containerd-74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69.scope - libcontainer container 74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69.
Aug 12 23:54:23.436111 containerd[1436]: time="2025-08-12T23:54:23.436053450Z" level=info msg="StartContainer for \"74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69\" returns successfully"
Aug 12 23:54:23.460187 systemd[1]: cri-containerd-74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69.scope: Deactivated successfully.
Aug 12 23:54:23.500893 containerd[1436]: time="2025-08-12T23:54:23.500825182Z" level=info msg="shim disconnected" id=74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69 namespace=k8s.io
Aug 12 23:54:23.500893 containerd[1436]: time="2025-08-12T23:54:23.500879903Z" level=warning msg="cleaning up after shim disconnected" id=74397290042fca1c90402ce889834c415f719a8109de30bea6e377ef57fdad69 namespace=k8s.io
Aug 12 23:54:23.500893 containerd[1436]: time="2025-08-12T23:54:23.500890983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:23.951082 kubelet[2461]: E0812 23:54:23.951046 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:23.957853 containerd[1436]: time="2025-08-12T23:54:23.957731782Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 12 23:54:23.978492 containerd[1436]: time="2025-08-12T23:54:23.978430578Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc\""
Aug 12 23:54:23.978973 containerd[1436]: time="2025-08-12T23:54:23.978942343Z" level=info msg="StartContainer for \"070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc\""
Aug 12 23:54:24.006757 systemd[1]: Started cri-containerd-070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc.scope - libcontainer container 070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc.
Aug 12 23:54:24.029264 containerd[1436]: time="2025-08-12T23:54:24.028733085Z" level=info msg="StartContainer for \"070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc\" returns successfully"
Aug 12 23:54:24.039922 systemd[1]: cri-containerd-070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc.scope: Deactivated successfully.
Aug 12 23:54:24.076346 containerd[1436]: time="2025-08-12T23:54:24.076268001Z" level=info msg="shim disconnected" id=070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc namespace=k8s.io
Aug 12 23:54:24.076346 containerd[1436]: time="2025-08-12T23:54:24.076338642Z" level=warning msg="cleaning up after shim disconnected" id=070b900893b790d3a94373ae9f8e2b5c3765bc01cdada560e16c581df8aed3dc namespace=k8s.io
Aug 12 23:54:24.076346 containerd[1436]: time="2025-08-12T23:54:24.076348802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:24.961065 kubelet[2461]: E0812 23:54:24.961029 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:24.969153 containerd[1436]: time="2025-08-12T23:54:24.969070347Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 12 23:54:24.998368 containerd[1436]: time="2025-08-12T23:54:24.998299135Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6\""
Aug 12 23:54:24.999063 containerd[1436]: time="2025-08-12T23:54:24.999031622Z" level=info msg="StartContainer for \"c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6\""
Aug 12 23:54:25.039759 systemd[1]: Started cri-containerd-c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6.scope - libcontainer container c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6.
Aug 12 23:54:25.067689 systemd[1]: cri-containerd-c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6.scope: Deactivated successfully.
Aug 12 23:54:25.068816 containerd[1436]: time="2025-08-12T23:54:25.067872515Z" level=info msg="StartContainer for \"c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6\" returns successfully"
Aug 12 23:54:25.088206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6-rootfs.mount: Deactivated successfully.
Aug 12 23:54:25.096988 containerd[1436]: time="2025-08-12T23:54:25.096933774Z" level=info msg="shim disconnected" id=c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6 namespace=k8s.io
Aug 12 23:54:25.096988 containerd[1436]: time="2025-08-12T23:54:25.096988014Z" level=warning msg="cleaning up after shim disconnected" id=c630e36a48c6ab55b7bd98b99b82db1dbccda06660a769f0704460c163a0d1d6 namespace=k8s.io
Aug 12 23:54:25.097197 containerd[1436]: time="2025-08-12T23:54:25.096998894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:25.684280 kubelet[2461]: E0812 23:54:25.682236 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:25.744437 kubelet[2461]: E0812 23:54:25.744353 2461 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 12 23:54:25.964569 kubelet[2461]: E0812 23:54:25.964015 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:26.026141 containerd[1436]: time="2025-08-12T23:54:26.026091150Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 12 23:54:26.141977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886822495.mount: Deactivated successfully.
Aug 12 23:54:26.150420 containerd[1436]: time="2025-08-12T23:54:26.150267901Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741\""
Aug 12 23:54:26.150943 containerd[1436]: time="2025-08-12T23:54:26.150918347Z" level=info msg="StartContainer for \"722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741\""
Aug 12 23:54:26.182801 systemd[1]: Started cri-containerd-722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741.scope - libcontainer container 722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741.
Aug 12 23:54:26.212090 systemd[1]: cri-containerd-722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741.scope: Deactivated successfully.
Aug 12 23:54:26.214103 containerd[1436]: time="2025-08-12T23:54:26.214060052Z" level=info msg="StartContainer for \"722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741\" returns successfully"
Aug 12 23:54:26.234799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741-rootfs.mount: Deactivated successfully.
Aug 12 23:54:26.250241 containerd[1436]: time="2025-08-12T23:54:26.250169163Z" level=info msg="shim disconnected" id=722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741 namespace=k8s.io
Aug 12 23:54:26.250241 containerd[1436]: time="2025-08-12T23:54:26.250229204Z" level=warning msg="cleaning up after shim disconnected" id=722d35f90081da0fcaffd7b097f10ed1d29d9d9251a7dc985031df6e788db741 namespace=k8s.io
Aug 12 23:54:26.250241 containerd[1436]: time="2025-08-12T23:54:26.250239884Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:26.968295 kubelet[2461]: E0812 23:54:26.968226 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:26.976502 containerd[1436]: time="2025-08-12T23:54:26.976447067Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 12 23:54:26.988946 containerd[1436]: time="2025-08-12T23:54:26.988885295Z" level=info msg="CreateContainer within sandbox \"f4b445cba15a2c7c50006f732436c2e633ad306e3f70bdf092269ec58cf89ab4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2\""
Aug 12 23:54:26.991086 containerd[1436]: time="2025-08-12T23:54:26.990999313Z" level=info msg="StartContainer for \"e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2\""
Aug 12 23:54:27.022289 systemd[1]: Started cri-containerd-e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2.scope - libcontainer container e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2.
Aug 12 23:54:27.075028 containerd[1436]: time="2025-08-12T23:54:27.074859657Z" level=info msg="StartContainer for \"e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2\" returns successfully"
Aug 12 23:54:27.371609 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Aug 12 23:54:27.973670 kubelet[2461]: E0812 23:54:27.972846 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:29.279394 kubelet[2461]: E0812 23:54:29.279335 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:29.521489 systemd[1]: run-containerd-runc-k8s.io-e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2-runc.Yj4GEq.mount: Deactivated successfully.
Aug 12 23:54:30.409722 systemd-networkd[1372]: lxc_health: Link UP
Aug 12 23:54:30.421007 systemd-networkd[1372]: lxc_health: Gained carrier
Aug 12 23:54:31.283036 kubelet[2461]: E0812 23:54:31.282126 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:31.316637 kubelet[2461]: I0812 23:54:31.315433 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kj44d" podStartSLOduration=9.315416997 podStartE2EDuration="9.315416997s" podCreationTimestamp="2025-08-12 23:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:54:28.004571514 +0000 UTC m=+87.425758776" watchObservedRunningTime="2025-08-12 23:54:31.315416997 +0000 UTC m=+90.736604219"
Aug 12 23:54:31.664206 systemd[1]: run-containerd-runc-k8s.io-e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2-runc.k8QWAK.mount: Deactivated successfully.
Aug 12 23:54:31.982822 kubelet[2461]: E0812 23:54:31.981374 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:32.323692 systemd-networkd[1372]: lxc_health: Gained IPv6LL
Aug 12 23:54:32.682196 kubelet[2461]: E0812 23:54:32.682149 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:35.999065 systemd[1]: run-containerd-runc-k8s.io-e56edea2fbe6f0d5de51c0ebf8fd2b7b77979c47beded060f1c38218adbc86c2-runc.gwSk7R.mount: Deactivated successfully.
Aug 12 23:54:36.058387 sshd[4298]: pam_unix(sshd:session): session closed for user core
Aug 12 23:54:36.061732 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:60694.service: Deactivated successfully.
Aug 12 23:54:36.063734 systemd[1]: session-26.scope: Deactivated successfully.
Aug 12 23:54:36.065539 systemd-logind[1413]: Session 26 logged out. Waiting for processes to exit.
Aug 12 23:54:36.066708 systemd-logind[1413]: Removed session 26.